AN EFFICIENT ALGORITHM FOR FINGERPRINT RECOGNITION USING MINUTIAE EXTRACTION.

Byline: N. U. Ain, F. Shaukat, A. S. Nagra and G. Raja

ABSTRACT

Fingerprints have long been considered a basic element of personal recognition. The performance of a fingerprint recognition system depends on the minutiae extracted from raw fingerprint images. In this study, an efficient scheme for fingerprint recognition was proposed. Initially, the input image was enhanced using pre-processing techniques. After image enhancement, image segmentation was performed and minutiae were extracted using ridge thinning and minutiae marking. False minutiae were then removed prior to the final match. In the proposed scheme, the inter-ridge distance was finely tuned to improve the overall sensitivity of fingerprint identification, which also reduced the false acceptance rate (FAR) and false rejection rate (FRR) considerably. The proposed scheme was evaluated using a dataset of 500 images taken from FVC 2002, FVC 2004 and FVC 2006 and showed better performance than previous methods.

Keywords: Enhancement, fingerprint recognition, fingerprint verification competition and minutiae.

INTRODUCTION

Fingerprints are a form of biometrics used to identify a person and verify his or her identity. Because of their uniqueness and consistency, fingerprint authentication relies on verifying a match between fingerprints (Kumar et al., 2011; Kudu et al., 2016). Fingerprints are among the oldest and most reliable biometrics used for personal identification, and they provide more secure and reliable user identification than a password, ID card or key (Sahu et al., 2016). Identification via fingerprints is a widely used method because data acquisition is easy and several sources (i.e. ten fingers) are available for identification. Fingerprints are frequently used by law enforcement and immigration agencies at crime scenes (Cao et al., 2018; Xu et al., 2017).

Fingerprints are biometric features that are unique to every human (Lim et al., 2014). A fingerprint is the graphical flow of ridges on a human finger, formed during infancy. According to research, no two people have the same fingerprints; even the ten fingers of the same individual differ in their corresponding fingerprints (Kumar et al., 2011; Wahby et al., 2013). Fingerprints are unique with respect to both global features, such as valleys and ridges (Sivaranjani et al., 2015; Win and Sein, 2011), and local features, such as ridge endings and bifurcations, called minutiae (Gnanasivam et al., 2010; Tiwari et al., 2012).

Several methods have already been proposed for fingerprint recognition. One of the major challenges in fingerprint recognition is the quality of the obtained fingerprint images (Conti et al., 2010). Fingerprints can be degraded by various factors, including wet, dry or greasy skin, wounds, scars and creases (Khan et al., 2016). Among the currently available fingerprint matching algorithms, mainly minutiae matching (Tiwari and Sharma, 2012), correlation filter matching (Algarra et al., 2014), transform feature matching, graph matching (Serratosa and Cortes, 2015), genetic algorithms (Silva et al., 2015) and hybrid feature matching along with other global and local methods (Peralta et al., 2015), minutiae-based matching is the most preferred (Cao et al., 2015; Win and Sein, 2011). A detailed description of these techniques can be found in Wahby et al. (2013). A review of some of the major techniques is summarized in (Table-1).

In this paper, we use a minutiae-based extraction technique for fingerprint matching with enhanced features to increase the verification capability of fingerprints. A minutiae-based matching algorithm involves two main computations: correspondence computation and similarity computation (Gnanasivam et al., 2010). For correspondence computation, each minutia point is assigned two descriptors: a texture descriptor and a minutiae descriptor (Sahu et al., 2016). An alignment-based matching algorithm is then used to establish the correspondence among the obtained minutiae. In similarity computation, a 17-D feature vector (Wahby et al., 2013) is extracted from the matching result and classified using a vector classifier.

Moreover, during the matching phase, the two sets of minutiae (Jie et al., 2006), the minutiae template and the minutiae to be verified, are aligned to obtain the final matching score (Hou et al., 2012). According to Feng (2008), another important feature for comparing fingerprints is the local ridge orientation (Gonzalez, 2009), which is measured with respect to the horizontal axis.

MATERIALS AND METHODS

Our proposed system for fingerprint recognition consisted of two major steps: (1) feature extraction and (2) feature matching (Gayathri et al., 2014; Shinde et al., 2015). Initially, a fingerprint image was taken from the user as input and pre-processing techniques were applied to it. Next, features (minutiae) were extracted and post-processing (false minutiae removal) was applied. Finally, the remaining minutiae points were aligned and matching was performed. The proposed scheme for the fingerprint recognition system is shown in (Fig-1).

Pre-processing: Initially, the image was taken as input from the user and pre-processing (Kumar et al., 2011; Sahu et al., 2016) was performed to improve the quality of the image (Wang et al., 2015) for better verification results. In the pre-processing phase, histogram equalization (Bana et al., 2011) and the Fast Fourier Transform (FFT) were applied to the image. After histogram equalization, the image was divided into small processing blocks of 32x32 pixels and (1) was applied:

F(u, v) = Σ(x=0..31) Σ(y=0..31) f(x, y) · exp[−j2π(ux/32 + vy/32)]    (1)

where u and v are the horizontal and vertical frequency indices of the 32x32 block and range from 0 to 31. In (1), f(x, y) represents the pixel value in the spatial domain and F(u, v) is the corresponding value in the frequency domain obtained after taking the FFT. The magnitude of the block spectrum, used in (3), is given by (2):

|F(u, v)| = abs(F(u, v))    (2)

To get the final enhanced image, (3) was applied. Applying the FFT to a small region of the image allows broken ridges to be reconnected along the dominant ridge orientation of that block. To enhance a specific block by its dominant frequencies, the FFT of the block is multiplied by its magnitude raised to a power K. This separates the parallel ridges clearly and thickens them.

g(x, y) = F⁻¹{ F(u, v) · |F(u, v)|^K }    (3)

where F⁻¹ denotes the inverse Fourier transform, given by (4):

f(x, y) = (1/1024) Σ(u=0..31) Σ(v=0..31) F(u, v) · exp[j2π(ux/32 + vy/32)]    (4)

In (3), g(x, y) is the pixel value obtained after taking the inverse Fourier transform of the enhanced spectrum; all other coefficients have the same meaning as in the forward FFT. Here, K is a constant factor, and values of K between 0.45 and 0.7 gave good results, depending on the quality of the obtained image. The results of histogram equalization and FFT enhancement are presented in (Fig-2) and (Fig-3), respectively.
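
As an illustration of this enhancement chain, a minimal MATLAB sketch is given below; it is not the authors' implementation. The file name 'fingerprint.tif', the block size of 32 and K = 0.6 are assumptions (K is simply chosen inside the stated 0.45-0.7 range), trailing partial blocks are left unenhanced, and histeq and mat2gray from the Image Processing Toolbox handle the equalization and rescaling steps.

```matlab
% Minimal sketch of histogram equalization followed by block-wise FFT
% enhancement, equations (1)-(4); assumes a grayscale fingerprint image.
img = im2double(imread('fingerprint.tif'));   % hypothetical input file
img = histeq(img);                            % histogram equalization
blk = 32;                                     % block size used in (1)
K   = 0.6;                                    % constant factor, chosen in 0.45-0.7
[rows, cols] = size(img);
enh = img;                                    % trailing partial blocks stay unenhanced
for r = 1:blk:rows - blk + 1
    for c = 1:blk:cols - blk + 1
        b = img(r:r+blk-1, c:c+blk-1);
        F = fft2(b);                                              % forward transform, eq. (1)
        enh(r:r+blk-1, c:c+blk-1) = real(ifft2(F .* abs(F).^K));  % eqs. (2)-(4)
    end
end
enh = mat2gray(enh);                          % rescale to [0, 1] for the later steps
```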

Segmentation and enhancement: Image segmentation and enhancement were also part of the pre-processing phase. Here, the image was binarized, segmentation was performed, and morphological operations were then applied to the image before the final minutiae extraction (Peralta et al., 2014).

Image binarization: This procedure transformed the 8-bit grayscale image into a binary image in which ridges were assigned the value 0 and furrows the value 1 (Sahu et al., 2016). A local adaptive binarization method was used to obtain the binary image. For each selected pixel, a specific sensitivity value was subtracted from the pixel value to obtain the desired threshold range. When the next pixel was selected, the same procedure was repeated to set a new threshold range that combined the latest calculation with the previously obtained threshold value. The final binarized image is shown in (Fig-4).
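
A block-wise version of such an adaptive threshold could look as follows. This is a sketch under assumptions rather than the authors' code: the 16x16 window and the sensitivity value of 0.03 are illustrative, and dark ridge pixels are mapped to 1 here so that the later thinning and crossing-number steps can treat ridges as foreground.

```matlab
function bw = binarize_local(enh, blk, sens)
% Local adaptive binarization sketch: each blk x blk block is thresholded
% at its own mean minus a sensitivity offset (e.g. blk = 16, sens = 0.03).
[rows, cols] = size(enh);
bw = false(rows, cols);
for r = 1:blk:rows - blk + 1
    for c = 1:blk:cols - blk + 1
        b = enh(r:r+blk-1, c:c+blk-1);
        t = mean(b(:)) - sens;                % local threshold for this block
        bw(r:r+blk-1, c:c+blk-1) = b < t;     % dark ridge pixels mapped to 1
    end
end
end
```

It would be called, for example, as bw = binarize_local(enh, 16, 0.03) on the enhanced image from the previous step.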

Image segmentation: Segmentation involved partitioning the image into multiple segments of a desired window size. These segments are sets of pixels, also termed superpixels. Image segmentation was used to locate objects and to extract the region of interest (ROI) containing effective ridges and furrows for reliable verification (Zhang et al., 2010). For ROI extraction, the image was divided into blocks of 16x16 pixels, and the direction of each block was then calculated. First, the gradient values gx(x, y) and gy(x, y) were calculated for all pixels in each block using a Sobel filter. The image obtained after block direction estimation is shown in (Fig-5).

Next, the least-squares approximation of the block direction was computed for every block using (5).

θ = (1/2) · tan⁻¹[ 2 ΣΣ gx(x, y)·gy(x, y) / ΣΣ (gx²(x, y) − gy²(x, y)) ]    (5)

After this, blocks without relevant ridge information were discarded using a certainty value that identifies useful blocks. This value was calculated for every block using (6).

E = [ 2 ΣΣ gx(x, y)·gy(x, y) + ΣΣ (gx²(x, y) − gy²(x, y)) ] / [ W·W · ΣΣ (gx²(x, y) + gy²(x, y)) ]    (6)

Here, W x W is the block size (i.e. 16x16), and the double sums in (5) and (6) run over all pixels of the block. After ROI extraction, the extracted image was smoothed using morphological operators (Bansal et al., 2011).
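
For one block, the direction and certainty computations of (5) and (6) can be sketched as below, assuming Sobel gradients obtained with conv2; atan2 is used as a numerically safer form of the arctangent in (5), and eps guards against an all-flat block.

```matlab
function [theta, E] = block_orientation(blkImg)
% Least-squares ridge orientation (5) and certainty level (6) for one
% W x W block of the enhanced image (double-valued grayscale).
sobelX = [-1 0 1; -2 0 2; -1 0 1];            % Sobel kernels
sobelY = sobelX';
gx = conv2(blkImg, sobelX, 'same');
gy = conv2(blkImg, sobelY, 'same');
num = 2 * sum(sum(gx .* gy));
den = sum(sum(gx.^2 - gy.^2));
theta = 0.5 * atan2(num, den);                % block direction, eq. (5)
W = size(blkImg, 1);                          % block size, e.g. 16
E = (num + den) / (W * W * sum(sum(gx.^2 + gy.^2)) + eps);   % certainty, eq. (6)
end
```

Blocks whose certainty E falls below a chosen threshold would then be excluded from the ROI.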

Minutiae extraction: The minutiae extraction stage was divided into two steps: (i) ridge thinning and (ii) minutiae marking. Ridge thinning was performed to eliminate redundant pixels until the ridges were one pixel wide. After that, H-breaks, isolated points and spikes were removed for accurate minutiae extraction. The thinned image is shown in (Fig-6). Next, the image was divided into 3x3 windows (Bansal et al., 2011) and minutiae points were marked using the crossing number technique. In a 3x3 window, if the central pixel is 1 and has three neighbors with value 1, the central pixel is a branch or bifurcation (Sudiro et al., 2012); the bifurcation pattern is shown in (Fig-7a). Alternatively, if the central pixel is 1 and has just one neighbor with value 1, the central pixel is a ridge ending (Wahby et al., 2013) or termination (Sudiro et al., 2012).

The termination pattern is shown in (Fig-7b). A further case occurs when both the uppermost and the rightmost pixels of the window are also 1; this trifurcation pattern is shown in (Fig-7c).
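
A minimal sketch of this marking step is given below; it classifies pixels of a thinned binary ridge map purely by ridge-neighbour counts, exactly as described above, and is not the authors' implementation.

```matlab
function [term, bif] = mark_minutiae(thin)
% Mark minutiae on a thinned binary ridge map (ridge pixels = 1):
% a ridge pixel with one ridge neighbour is a termination, one with
% three ridge neighbours is a bifurcation, as described in the text.
[rows, cols] = size(thin);
term = false(rows, cols);
bif  = false(rows, cols);
for r = 2:rows-1
    for c = 2:cols-1
        if thin(r, c)
            nb = sum(sum(thin(r-1:r+1, c-1:c+1))) - 1;   % 8-neighbour count
            term(r, c) = (nb == 1);                      % ridge ending
            bif(r, c)  = (nb == 3);                      % bifurcation
        end
    end
end
end
```

The thinned map itself could be obtained, for example, with bwmorph(bw, 'thin', Inf) from the Image Processing Toolbox.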

Minutiae alignment: Next, post-processing was performed on the image. This phase involved two main steps: (i) false minutiae removal and (ii) unification of terminations and bifurcations. False minutiae points are not removed in the pre-processing phase, so post-processing (Kumar et al., 2011; Sahu et al., 2016) was performed to remove such points and thereby reduce the false acceptance rate (FAR) and false rejection rate (FRR). To remove these points, the average inter-ridge distance (D) was calculated using (7):

Inter Ridge Distance (D) = row length / number of pixels with value 1 in the row    (7)
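
Under this reading of (7), the average inter-ridge distance can be sketched as follows, taking the thinned binary ridge map as input; this is an illustrative helper, not the authors' code.

```matlab
function D = inter_ridge_distance(thin)
% Average inter-ridge distance, eq. (7): for each row that contains
% ridge pixels, divide the row length by the number of ridge pixels,
% then average over those rows.
ridgeCount = sum(thin, 2);                    % ridge pixels per image row
hasRidge   = ridgeCount > 0;                  % ignore rows with no ridges
D = mean(size(thin, 2) ./ ridgeCount(hasRidge));
end
```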

(Fig-8) shows the minutiae marking and (Fig-9) shows the false minutiae points (Sahu et al., 2016). After removal of the false minutiae points, the leftover minutiae points were marked as shown in (Fig-10). Finally, terminations and bifurcations were unified through (8) and the following algorithm (a sketch of this step is given after the list):

θ = tan⁻¹[(sy − ty) / (sx − tx)]    (8)

(i) Track a ridge segment whose starting point is the termination and whose length is D.

(ii) Sum up all the x-coordinates of the points in that ridge segment.

(iii) Divide this sum by D to obtain sx, and obtain sy from the y-coordinates in the same way.
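
Once a ridge segment of length D has been traced from a termination, steps (ii)-(iii) and equation (8) reduce to the short function below. The tracing itself is not shown, and seg_x, seg_y and (tx, ty) are illustrative parameter names for the traced coordinates and the termination point.

```matlab
function ang = termination_angle(seg_x, seg_y, tx, ty)
% Orientation of a termination, eq. (8), given the coordinates of the
% D points traced along the ridge (seg_x, seg_y) and the termination (tx, ty).
sx  = sum(seg_x) / numel(seg_x);      % step (iii): mean x over the segment
sy  = sum(seg_y) / numel(seg_y);      % same for the y-coordinates
ang = atan2(sy - ty, sx - tx);        % theta = tan^-1[(sy - ty)/(sx - tx)]
end
```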

Match: Finally, the remaining minutiae points were aligned and final matching (Algarra et al., 2014) was performed to compute the percentage match between the suspect image and the templates stored in the database (Feng, 2008; Mohsen et al., 2004). (Fig-11) shows the result of feature matching. The ridge associated with each minutia was represented as a series of x-coordinates (x1, x2, ..., xn) of the points on the ridge. The similarity of two correlated ridges was derived from (9):

S = Σ(i=1..n) xi·Xi / [ Σ(i=1..n) xi²·Xi² ]^0.5    (9)

Here, xi represents the reference points stored in the template and Xi the corresponding points of the image to be verified; two ridges were considered matched when S was greater than 0.8. The overall match score was then computed using (10):

Match Score = number of matched minutiae pairs / number of minutiae in the template    (10)
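
A compact sketch of (9) and (10) is given below, assuming the two ridge coordinate vectors have already been aligned and sampled to the same length; the function and parameter names are illustrative, not the authors' API.

```matlab
function [S, score] = ridge_match(x, X, matchedPairs, templateMinutiae)
% Ridge correlation similarity, eq. (9), and overall match score, eq. (10).
% x: ridge x-coordinates from the template; X: corresponding coordinates
% from the image under verification (same length assumed).
S = sum(x .* X) / sqrt(sum(x.^2 .* X.^2));    % eq. (9); ridges match when S > 0.8
score = matchedPairs / templateMinutiae;      % eq. (10)
end
```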

Table 1: Review of current techniques (N/A = not available).

Authors###Technique###FAR, FRR, VRR###Database used###Accuracy

(Liu and Cao, 2012)###Gabor filter based enhancement and crossing number concept for minutiae extraction###FAR = 0.085%, FRR = 1.4%, VR = 99.75%###2000 fingerprint images of 200 individuals at 500 dpi, size: 256x360###N/A

(Afsar et al., 2004)###Gabor filter based enhancement and crossing number concept for minutiae extraction###FAR = 1%, FRR = 7%, EER = 5%###FVC 2000, 800 fingerprints from 110 different fingers###High (92%)

(Ishpreet et al., 2012)###Histogram equalization for enhancement and crossing number concept for minutiae extraction###FAR = 0.06%, FRR = 6.9%###FVC 2000###N/A

(Chaudhari et al., 2014)###Minutiae based identification using crossing number concept###FRR = 0.23%, FAR = 0%###FVC 2000, size: 260x300###High

Table-2: Performance analysis of proposed scheme.

Sr. No###Threshold###FAR###FRR

1###7###0.019###3.01

2###8###0.007###5.50

3###9###0.001###6.07

4###10###0.000###7.08

RESULTS AND DISCUSSION

The proposed method was implemented in MATLAB and the results are presented in this section. To evaluate the proposed system, a database of 500 images was taken from the Fingerprint Verification Competition (FVC) databases (Lim et al., 2014; Maio et al., 2004) and the National Institute of Standards and Technology (NIST) database (Jain et al., 2010). The standard evaluation metrics, false acceptance rate (FAR) and false rejection rate (FRR) (Kumar et al., 2011), were used to report the results; they were calculated as:

FAR (%) = (FA / N) x 100

FRR (%) = (FR / N) x 100

where N is the total number of samples, FA is the number of false acceptance incidents and FR is the number of false rejection incidents. The FAR and FRR were evaluated at different threshold levels, with the threshold varied from 7 to 10. The proposed system achieved a minimum FAR of 0.000 and a minimum FRR of 3.01 for the dataset of 500 images; the results are summarized in (Table 2). From the review of existing methods, we found it very hard to compare results with previously published work because of non-uniform performance metrics and non-standard datasets. Nevertheless, we compared our results with previous techniques that used standard performance metrics and evaluation criteria. We first compared our results with Kumar et al. (2011), who provided FAR and FRR values at the same threshold values.

Kumar et al. (2011) achieved minimum FAR and FRR values of 0.00 and 7.12, respectively, which shows the superiority of our proposed system. In addition, we compared our proposed system with Afsar et al. (2004), Ishpreet et al. (2012) and Chaudhari et al. (2014). Afsar et al. (2004) evaluated their system on a dataset of 800 images and claimed high accuracy, with FAR and FRR values of 1% and 7%, respectively. Similarly, Ishpreet et al. (2012) evaluated their system on the FVC 2000 dataset and achieved FAR and FRR values of 0.06 and 6.9, respectively. Finally, Chaudhari et al. (2014) proposed a system that achieved FAR and FRR values of 0.00 and 0.23, respectively, but its restriction to a fixed image size limited the generalization of the results.

Compared with these systems, our proposed method achieved better FAR and FRR values on a sufficiently large dataset, which makes it robust and efficient.

Conclusion: In this study, we proposed an efficient scheme for fingerprint recognition using minutiae extraction techniques. The accuracy of the proposed system improved to 80% as compared to the previous methods.

REFERENCES

Afsar, F., M. Arif and M. Hussain (2004). Fingerprint identification and verification system using minutiae matching. Nat. conf. on emer. Tech. NCET , 141-146.

Algarra, M., K. Radotic, A. Kalauzi, D. Mutavdžić, A. Savic, J. Jimenez-Jimenez, E. Rodriguez-Castellon, J.C.E. da Silva and J.J. Guerrero-Gonzalez (2014). Fingerprint detection and using intercalated CdSe nanoparticles on non-porous surfaces. Analy. Chim. Ac., 812: 228-235.

Bana, S. and D. Kaur (2011). Fingerprint Recognition using image segmentation. Int. J. adv. eng. Sci. tech. 5(1): 012-023.

Bansal, R., P. Sehgal, and P. Bedi (2011). Minutiae extraction from fingerprint images - A review. Int. J. com. sci. 8(5): 74-85.

Cao, K., Yang, X., Tao, X., Li, P., Zang, Y. and Tian, J. (2010). Combining features for distorted fingerprint matching. J. net. Com. app. 33(3): 258-267.

Cao, K. and A.K. Jain (2015). Learning fingerprint reconstruction: From minutiae to image. IEEE Trans. on info. foren. and sec., 10(1): 104-117.

Cao, K. and A.K. Jain (2018). Automated latent fingerprint recognition. IEEE Tran. pattern analy. mach. Intel.

Conti, V. et al., (2010). Introducing Pseudo-Singularity points for efficient fingerprints classification and recognition. In CISIS 2010 - The 4th Intl. Conf. Compl. Int. Soft. Intensive Sys. 368-375.

Chaudhari, A.S., G.K. Patnaik, and S.S. Patil (2014). Implementation of minutiae based fingerprint identification system using crossing number concept. Info. econ. 18(1):17-25.

Feng, J. (2008). Combining minutiae descriptors for fingerprint matching. Patt. Recog., 41(1): 342-352.

Gayathri, S. and V. Sridhar (2014). ASIC implementation of image enhancement technique for Fingerprint recognition process. Intl. Conf. Conte. Comp. and Info. 868-873.

Gnanasivam, P. and S. Muttan (2010). An efficient algorithm for fingerprint preprocessing and feature extraction. In pr. comp. sci. 133-142.

Gonzalez, R.C. (2009). Digital Image Processing, 3rd ed. Pearson Education.

Hou, Z. et al., (2012). A variational formulation for fingerprint orientation modeling. Patt. Recog. 45(5): 1915-1926.

Ishpreet, S.V. and M. Raman (2012). Fingerprint image enhancement and minutiae matching in fingerprint verification. J. Comp. Tech., 1.

Jain, A.K., J. Feng, and K. Nandakumar (2010). Fingerprint matching. IEEE comp. soc. 43(2).

Jie, Y. (2006). Fingerprint minutiae matching algorithm for real time system. Patt. Recog. 39(1): 143-146.

Khan, M.A.U., T. M. Khan, D. G. Bailey, and Y. Kong (2016). A spatial domain scar removal strategy for fingerprint image enhancement. Patt. Recog. 60: 258-74.

Kudu, N. (2016). Biometric identification system using fingerprint and knuckle as multimodality features. Intl. Conf. on Elect. Electr. and Opt. Tech. 3279-3284.

Kumar, K., B. Kumar and D. Kumar (2011). Fingerprint recognition using minutiae extraction. IEEE Intl. Conf. on ICT-Initiatives.

Lim, J.F. and R.K.Y. Chin (2014). Enhancing fingerprint recognition using minutiae-based and image-based matching techniques. Proceedings - 1st Intl. Conf. Art. Intell. Model. Sim. AIMS 2013, 261-266.

Liu, M., S. Liu, and Q. Zhao (2014). Fingerprint orientation field reconstruction by weighted discrete cosine transform. Info. sci. 268: 65-77.

Maio, D. et al. (2004). Fvc2004: Third fingerprint verification competition. Bio. auth. 24(3): 1-7.

Mohsen, S.M., S.M.Z. Farhan and M.M.A. Hashem (2004). Automated fingerprint recognition using minutiae matching technique for the large fingerprint database. 3rd Intl. Conf. Elec. Comp. Eng. ICECE 2004, 28-30.

Peralta, D. (2014). Fast fingerprint identification for large databases. Patt. Recognition, 47(2): 588-602.

Peralta, D., M. Galar, I. Triguero, D. Paternain, S. Garcia, E. Barrenechea, J. M. Benitez, H. Bustince, and F. Herrera (2015). A survey on fingerprint minutiae-based local matching for verification and identification: Taxonomy and experimental evaluation. Info. Sci. 315: 67-87.

Sahu, D. and R. Shrivas (2016). Fingerprint reorganization using minutiae based matching for identification and verification. 5(5): 1710-1715.

Serratosa, F., and X. Cortes (2015). Interactive graph-matching using active query strategies. Patt. Recog. 48 (4): 1364-73.

Silva, A G A., I. A. M. Barbosa, R. L. Parente, L. V. Batista, J. J. B. Primo, A. S. Marinho, and P. Alves (2015). Analysis of a genetic algorithm-based approach in the optimization of the sourceAFIS's matching algorithm. In Intl. Conf. on Sci. Comp., 23-28.

Shinde, A.S. and V. Bendre (2015). An embedded fingerprint authentication system. Proceedings - 1st Intl. Conf. Comp. Comm. Cont. Auto. ICCUBEA 2015, 205-208.

Sivaranjani, S. (2015). Footprint feature extraction on raspberry Pi . ICIIECS , pp.1-6.

Sudiro, S. and R. Yuwono (2012). Adaptable fingerprint minutiae extraction algorithm based-on crossing number method for hardware implementation using FPGA device. Intl. J. com. sci. eng. info. tech. (IJCSEIT). 2(3).

Tiwari, S. and N. Sharma (2012). Q-Learning approach for minutiae extraction from fingerprint image. Procedia Technology, 6: 82-89.

Wahby Shalaby, M.A. and M.O. Ahmad (2013). A multilevel structural technique for fingerprint representation and matching. Signal Processing, 93(1): 56-69.

Wang, J.W., Le, N.T., Wang, C.C. and Lee, J.S. (2015). Enhanced ridge structure for improving fingerprint image quality based on a wavelet domain. IEEE Signal Process. Lett. 22(4), 390-394.

Win, Z.M. and M.M. Sein (2011). Fingerprint recognition system for low quality images. SICE Annual Conference 2011, pp.1133-1137.

Xu, M., Feng, J., Lu, J. and Zhou (2017). Latent fingerprint enhancement using Gabor and minutia dictionaries. In Img. Proc. (ICIP), 2017 IEEE Intl. Conf. 3540-3544.

Zhang, K. (2010). Study on the embedded fingerprint image recognition system. In Proceedings - 2010 Intl. Conf. Info. Sci. Man. Eng., ISME 2010: 169-172.