
Pupil Segmentation Using Orientation Fields, Radial Non-Maximal Suppression and Elliptic Approximation.

I. INTRODUCTION

Iris recognition systems have been widely developed for biometric personal identification because of their high reliability and accuracy [1-3]. Iris recognition is more accurate and reliable than fingerprint or face recognition because an individual's iris pattern is complex, unique, and stable. Iris recognition is performed through the following steps in sequence: iris image capturing, preprocessing, iris segmentation, iris normalization, feature extraction, and feature matching [4,5]. Among these steps, iris segmentation is essential because false segmentation may lead to improper feature extraction [6-9]. In particular, even a small error near the pupil may be fatal, since the most significant and meaningful iris information is distributed in this region. It is therefore very important to estimate the inner boundaries of irises correctly, as shown in Fig. 1, where the image normalized using a circle estimate still contains pupil textures; even a small error in estimating the inner boundary can be critical.

To detect the inner boundaries of irises, various methods are commonly used, such as the integro-differential operator [10,11], edge detection [1,8,12-14], the Hough transform [15,16], and active contours [15,17]. Segmentation methods based on these techniques may produce improper results owing to less discriminative regions such as the sclera, eyelids, eyelashes, pupil, and reflections [18]. In particular, pupil regions may appear as bright gray areas or contain small reflection regions under the various IR (infrared) illuminations used for image capture.

In most iris recognition systems, the detected boundaries are fitted with circular or elliptic models. However, weak edges, varying illumination, and specular reflections make it difficult to estimate the parameters describing the circles or ellipses. In recent years, methods based on active contours have been widely used; however, some of them require the initial mask to be set manually, and they may segment blurred or low-contrast iris images incorrectly.

In this paper, to develop an accurate pupil segmentation method for various illumination and acquisition conditions, orientation fields formed from gradient information are used to detect the initial pupil center. Radial non-maximal suppression and boundary fitting are then applied to estimate the shape of the pupil boundary. Additionally, an elliptic model is used, whose parameters are estimated by a novel approximation method.

II. DETECTION OF THE INITIAL PUPIL CENTER

The proposed pupil segmentation method is performed in the following steps: 1) an initial pupil center is detected using orientation fields, 2) edges are suppressed and detected using the direction toward the initial pupil center, and 3) the elliptic pupil boundary is fitted by radius-updating, center-shifting, and ROI (region of interest) shrinking until the consecutive error is sufficiently small, as shown in Fig. 2.

To estimate the local orientation field, we use gradient information. The tangential direction at each pixel is computed from the horizontal and vertical gradients as follows:

$\alpha = \tan^{-1}\left( \dfrac{G_y}{G_x} \right)$,  (1)

where $G_x$ and $G_y$ are the horizontal and vertical gradients calculated with the Sobel operator.

To form the orientation fields, 8x8 non-overlapping blocks are used; the block orientation is calculated as the average of the doubled orientations, which is highly efficient and robust to noisy samples [19,20], given by

$\theta = \dfrac{1}{2}\tan^{-1}\left( \dfrac{\sum_{i}\sin 2\alpha_i}{\sum_{i}\cos 2\alpha_i} \right)$,  (2)

where i is the pixel index within a block and $\alpha_i$ is the tangential direction of the i-th pixel. To compute the orientation fields accurately, pixels with low gradient magnitudes are not used and their reliabilities are set to zero. For the remaining pixels, reliabilities are computed using the Pythagorean trigonometric identity: for the same angle, the sum of the squares of sine and cosine equals one, while the sum for different angles cannot be greater than two. The reliability of an orientation is therefore calculated as

[mathematical expression not reproducible] (3)

where n is the number of pixels within a block. An example of the orientation fields is shown in Fig. 3, where the tangential directions of the pupil, iris, and eyelids can be clearly observed.
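For concreteness, the sketch below shows one way to compute such block orientation fields with NumPy/SciPy. The function name, block size, and magnitude threshold are illustrative choices, and the coherence-style reliability only stands in for Eq. (3), which is not reproduced in the source.

```python
import numpy as np
from scipy import ndimage

def block_orientation_field(img, block=8, mag_thresh=10.0):
    """Estimate per-block orientation and reliability from Sobel gradients.

    Doubled-angle averaging follows Eq. (2); the reliability is a
    coherence-style stand-in for Eq. (3) (illustrative assumption).
    """
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)          # horizontal gradient G_x
    gy = ndimage.sobel(img, axis=0)          # vertical gradient G_y
    alpha = np.arctan2(gy, gx)               # tangential direction, Eq. (1)
    mag = np.hypot(gx, gy)
    valid = mag > mag_thresh                 # ignore low-magnitude pixels

    hb, wb = img.shape[0] // block, img.shape[1] // block
    theta = np.zeros((hb, wb))
    rel = np.zeros((hb, wb))
    for by in range(hb):
        for bx in range(wb):
            sl = (slice(by*block, (by+1)*block), slice(bx*block, (bx+1)*block))
            a = alpha[sl][valid[sl]]
            if a.size == 0:
                continue                                 # reliability stays zero
            s, c = np.sin(2*a).sum(), np.cos(2*a).sum()
            theta[by, bx] = 0.5 * np.arctan2(s, c)       # average of doubled angles
            rel[by, bx] = np.hypot(s, c) / a.size        # 1 when all angles agree
    return theta, rel
```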

To detect an initial center, a center map is generated by voting for potential centers along the normal directions ($\alpha - \pi/2$ and $\alpha + \pi/2$) over a possible radius range, using the blocks whose reliability is greater than a threshold. The local maxima greater than a minimum voting threshold are then selected as initial center candidates. Among the candidates with voting values greater than 90% of the highest voting value, the pixel with the greatest symmetry and an average brightness below a threshold is selected as the initial center, where the symmetry of a pixel $(x_0, y_0)$ is calculated by

[mathematical expression not reproducible] (4)

where I is the gray-level intensity in the range [0, 255], s is the search range, and N is the number of pixels used in the summation. To reduce the effect of illumination-reflected pixels, pixels with intensity values greater than a threshold are excluded from the symmetry computation. Examples of a pupil center map and initial center detection are shown in Fig. 3, where the blue numeric values show, from left to right, the candidate index, symmetry, and average brightness.
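A minimal sketch of the voting and candidate-scoring steps is given below. The radius range, reliability threshold, and the point-symmetry score are illustrative assumptions (Eq. (4) is not reproduced in the source), so the sketch only mirrors the description above.

```python
import numpy as np

def vote_center_map(theta, rel, block=8, r_range=(15, 60), rel_thresh=0.5):
    """Accumulate a pupil-center map: each reliable block votes along both of
    its normal directions (theta +/- pi/2) over the admissible radius range.
    Radius range and reliability threshold are illustrative values."""
    h, w = theta.shape[0] * block, theta.shape[1] * block
    cmap = np.zeros((h, w))
    for by, bx in zip(*np.nonzero(rel > rel_thresh)):
        cy, cx = (by + 0.5) * block, (bx + 0.5) * block      # block center
        for normal in (theta[by, bx] - np.pi / 2, theta[by, bx] + np.pi / 2):
            for r in range(*r_range):
                vy = int(round(cy + r * np.sin(normal)))
                vx = int(round(cx + r * np.cos(normal)))
                if 0 <= vy < h and 0 <= vx < w:
                    cmap[vy, vx] += 1
    return cmap

def symmetry_score(img, y0, x0, s=20, refl_thresh=220):
    """Illustrative point-symmetry score for a candidate center: compare
    intensities at opposite offsets, skipping reflection-bright pixels.
    Higher (less negative) means more symmetric."""
    img = img.astype(np.float64)
    diffs = []
    for dy in range(-s, s + 1):
        for dx in range(-s, s + 1):
            y1, x1, y2, x2 = y0 + dy, x0 + dx, y0 - dy, x0 - dx
            if not (0 <= y1 < img.shape[0] and 0 <= x1 < img.shape[1] and
                    0 <= y2 < img.shape[0] and 0 <= x2 < img.shape[1]):
                continue
            if img[y1, x1] > refl_thresh or img[y2, x2] > refl_thresh:
                continue
            diffs.append(abs(img[y1, x1] - img[y2, x2]))
    return -float(np.mean(diffs)) if diffs else -np.inf
```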

III. RADIAL NON-MAXIMAL SUPPRESSION

To detect the pupil boundaries, both vertical and horizontal gradient values are computed, and radial non-maximal suppression is performed using the following two suppression criteria:

$|\theta(x,y) - \theta_0(x,y)| > \tau$,  (5)

$\bigl(|G(x,y)| < |G(x + \delta\cos\theta_0,\; y + \delta\sin\theta_0)|\bigr)$ OR $\bigl(|G(x,y)| < |G(x - \delta\cos\theta_0,\; y - \delta\sin\theta_0)|\bigr)$,  (6)

where $\theta(x,y)$ is the normal direction of the gradient at (x,y), $\theta_0(x,y)$ is the radial direction from the initial center to (x,y), $\tau$ is a threshold, and |G(x,y)| is the gradient magnitude. Because we work with digital images, $(\delta\cos\theta_0, \delta\sin\theta_0)$ is quantized to eight offsets: (-1,-1), (-1,0), (-1,+1), (0,+1), (+1,+1), (+1,0), (+1,-1), and (0,-1). Additionally, connected-component labeling is applied to remove small connected edges from the suppressed edge map. An example is shown in Fig. 3, where the non-zero dark pixels are those removed by the labeling.
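The following sketch illustrates the two suppression criteria and the small-component removal, assuming an initial center estimate is available; the threshold values and the helper name are illustrative.

```python
import numpy as np
from scipy import ndimage

def radial_nonmax_suppression(img, center, tau=np.deg2rad(30),
                              mag_thresh=10.0, min_size=5):
    """Keep edge pixels whose gradient direction roughly matches the radial
    direction from the initial center (Eq. (5)) and that are local maxima
    along that direction (Eq. (6)). Assumes the pupil is darker than the
    iris, so boundary gradients point outward; thresholds are illustrative."""
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    h, w = img.shape
    cy, cx = center
    yy, xx = np.mgrid[0:h, 0:w]
    theta0 = np.arctan2(yy - cy, xx - cx)              # radial direction from center
    theta = np.arctan2(gy, gx)                         # gradient (normal) direction
    dtheta = np.abs(np.angle(np.exp(1j * (theta - theta0))))  # wrapped difference
    keep = (mag > mag_thresh) & (dtheta < tau)         # Eq. (5): direction test

    # Eq. (6): suppress pixels that are not maxima along the radial direction,
    # quantized to the eight neighbor offsets listed in the paper.
    dy = np.rint(np.sin(theta0)).astype(int)
    dx = np.rint(np.cos(theta0)).astype(int)
    edges = np.zeros_like(keep)
    for y, x in zip(*np.nonzero(keep)):
        y1, x1 = min(max(y + dy[y, x], 0), h - 1), min(max(x + dx[y, x], 0), w - 1)
        y2, x2 = min(max(y - dy[y, x], 0), h - 1), min(max(x - dx[y, x], 0), w - 1)
        if mag[y, x] >= mag[y1, x1] and mag[y, x] >= mag[y2, x2]:
            edges[y, x] = True

    # Remove small connected components, as done after the suppression step.
    labels, n = ndimage.label(edges)
    sizes = ndimage.sum(edges, labels, range(1, n + 1))
    for i, sz in enumerate(sizes, start=1):
        if sz < min_size:
            edges[labels == i] = False
    return edges
```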

IV. PUPIL SEGMENTATION

An initial pupil radius is detected from a radial histogram of the edges, which is a discrete function $H[r] = n_r$, where r is the distance from the pupil center and $n_r$ is the number of edge pixels at distance r. The initial pupil radius $R_0$ is then determined as the local maximum of the smoothed radial histogram H nearest to r = 0. An example is shown in Fig. 3, where the magenta circle is the initial pupil formed from the initial center and radius.
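A short sketch of the radial-histogram step is given below; the box smoothing and the maximum radius are illustrative assumptions, since the paper does not specify the smoothing filter.

```python
import numpy as np

def initial_radius(edge_mask, center, max_r=120, smooth=5):
    """Build the radial edge histogram H[r] and return the local maximum of
    its smoothed version nearest to r = 0 as the initial pupil radius."""
    cy, cx = center
    ys, xs = np.nonzero(edge_mask)
    r = np.rint(np.hypot(ys - cy, xs - cx)).astype(int)
    hist = np.bincount(r[r < max_r], minlength=max_r).astype(float)
    h = np.convolve(hist, np.ones(smooth) / smooth, mode='same')  # smoothed H
    for i in range(1, max_r - 1):            # nearest local maximum from r = 0
        if h[i] > 0 and h[i] >= h[i - 1] and h[i] >= h[i + 1]:
            return i
    return None
```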

To detect pupils accurately, we apply radius-updating, center-shifting, and ROI shrinking by adjusting the radius and center of a circular model as follows:

[mathematical expression not reproducible] (7)

[mathematical expression not reproducible] (8)

[mathematical expression not reproducible] (9)

where $R_t$ is the average radius of the pupil at update step t, $C^x_t$ and $C^y_t$ are the x and y coordinates of the pupil center, respectively, e(k) is the k-th edge position, $e_x(k)$ and $e_y(k)$ are its x and y coordinates, respectively, $N_e$ is the number of edges, Dist(A,B) is the distance between A and B, and $\theta_c(k)$ is the direction of the k-th edge relative to the center.
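Because Eqs. (7)-(9) are not reproduced in the source, the following single update step is only an interpretation of the textual definitions above: the radius is the mean edge-to-center distance, and the center is re-estimated so that the edges lie at that radius on average.

```python
import numpy as np

def update_circle(edge_pts, prev_center):
    """One radius-updating / center-shifting step for the circular model.
    An interpretation of Eqs. (7)-(9), which are not reproduced in the source."""
    ey = edge_pts[:, 0].astype(float)            # e_y(k)
    ex = edge_pts[:, 1].astype(float)            # e_x(k)
    cy, cx = prev_center
    r_t = np.hypot(ey - cy, ex - cx).mean()      # cf. Eq. (7): mean Dist(e(k), C_{t-1})
    theta_c = np.arctan2(ey - cy, ex - cx)       # direction of edge k relative to center
    cx_t = np.mean(ex - r_t * np.cos(theta_c))   # cf. Eq. (8)
    cy_t = np.mean(ey - r_t * np.sin(theta_c))   # cf. Eq. (9)
    return r_t, (cy_t, cx_t)
```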

To improve the accuracy of pupil segmentation, an elliptic model, as shown in Fig. 4, is used, and the ellipse is approximated as follows: 1) the average distance to the center over every point on the curve is R, 2) the average distance to the center over the points whose distance is greater than R is $R^+$, 3) the average distance to the center over the points whose distance is less than R is $R^-$, 4) $R^+$ is approximated by the weighted average of the semi-major axis (a) and R, 5) $R^-$ is approximated by the weighted average of the semi-minor axis (b) and R, and 6) the rotation direction ($\theta_p$) is approximated by the average orientation of the points whose distance to the center is greater than R.

By applying the above approximations, a and b can be estimated by

$a_t = \dfrac{R^+_t - (1-k)R_t}{k}$,  (10)

$b_t = \dfrac{R^-_t - (1-k)R_t}{k}$,  (11)

where k is an approximation weight (we use k = 0.5), and $\theta_p$ is estimated by

[mathematical expression not reproducible] (12)

where $\alpha_k$ is the orientation of the k-th point whose distance to the center is greater than R.
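The semi-axis estimates of Eqs. (10)-(11) translate directly into code; the rotation estimate below averages the orientations of the far points on the doubled angle, which is only one plausible reading of Eq. (12) (not reproduced in the source).

```python
import numpy as np

def approximate_ellipse(edge_pts, center, k=0.5):
    """Estimate the semi-axes and rotation of the elliptic pupil model.
    Eqs. (10)-(11) are implemented directly; the rotation estimate is an
    interpretation of Eq. (12)."""
    ey = edge_pts[:, 0].astype(float)
    ex = edge_pts[:, 1].astype(float)
    cy, cx = center
    d = np.hypot(ey - cy, ex - cx)
    r = d.mean()                                  # average distance R
    r_plus = d[d > r].mean()                      # R+: mean distance of far points
    r_minus = d[d < r].mean()                     # R-: mean distance of near points
    a = (r_plus - (1 - k) * r) / k                # Eq. (10)
    b = (r_minus - (1 - k) * r) / k               # Eq. (11)
    # Average orientation of the far points, combined on the doubled angle so
    # that directions pi apart (both ends of the major axis) reinforce.
    ang = np.arctan2(ey[d > r] - cy, ex[d > r] - cx)
    theta_p = 0.5 * np.arctan2(np.sin(2 * ang).sum(), np.cos(2 * ang).sum())
    return a, b, theta_p
```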

The initial ROI is taken as the maximal dilation of the initial boundary; the ROI is then shrunk by reducing the dilation mask, so that the ROI at update step t is the dilation region of the pupil boundary at step t-1. Updating terminates when the consecutive differences in radius and center are smaller than a predefined error threshold.
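Putting the pieces together, a minimal sketch of the refinement loop is shown below. A ring-shaped ROI stands in for the dilation-based ROI described above, the circular update from the earlier sketch is inlined, and all widths and thresholds are illustrative.

```python
import numpy as np

def refine_pupil(edge_mask, center, radius, err_thresh=0.5, max_iter=30):
    """Iteratively re-estimate the pupil circle inside a shrinking ring ROI
    around the current boundary, stopping when the consecutive change in
    radius and center falls below a threshold (illustrative parameters)."""
    h, w = edge_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = map(float, center)
    r = float(radius)
    roi_half_width = max(r * 0.5, 5.0)                 # generous initial ROI
    for _ in range(max_iter):
        d = np.hypot(yy - cy, xx - cx)
        pts = np.argwhere(edge_mask & (np.abs(d - r) < roi_half_width))
        if len(pts) == 0:
            break
        ey, ex = pts[:, 0].astype(float), pts[:, 1].astype(float)
        r_new = np.hypot(ey - cy, ex - cx).mean()      # radius update (cf. Eq. (7))
        ang = np.arctan2(ey - cy, ex - cx)
        cy_new = np.mean(ey - r_new * np.sin(ang))     # center shift (cf. Eqs. (8)-(9))
        cx_new = np.mean(ex - r_new * np.cos(ang))
        change = abs(r_new - r) + np.hypot(cy_new - cy, cx_new - cx)
        r, cy, cx = r_new, cy_new, cx_new
        roi_half_width = max(roi_half_width * 0.7, 3.0)  # ROI shrinking
        if change < err_thresh:                          # termination criterion
            break
    return r, (cy, cx)
```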

V. EXPERIMENTAL RESULTS

To evaluate the proposed method, we tested it on two commonly used datasets: CASIA-Interval v3.0 (CASIA v3.0 dataset, Institute of Automation, Chinese Academy of Sciences) and IITD (Indian Institute of Technology Delhi).

A procedural example of the proposed method is shown in Fig. 5, where the initial pupil center is not detected correctly, but the final pupil boundary is corrected by adjusting the radius. Fig. 6 shows the repeated adjustment process of radius-updating, center-shifting, and ROI shrinking: the white circle in the top image shows the initial circle, and the bottom images show the first and second iterations, respectively. The boundary is adjusted within two iterations in this case, although ten or more iterations are needed in most cases.

The results of the proposed method on CASIA-Interval v3.0 are shown in Figs. 7 and 8: most images are correctly segmented, as shown in Fig. 7, while the three worst results are shown in Fig. 8. Of all 2639 images in CASIA-Interval v3.0, only two produced large errors (the two bottom results in Fig. 8), and no pupils were missed. Comparisons between circular and elliptic pupils are shown in Fig. 9, where the circular boundaries leave some large uncovered areas.

Objective performance was measured by the uncovered-area (UC), over-covered (OC), and false-covered (FC) ratios as reported in [8] and [19], and the results were compared with four other methods: Monro [13] (circular pupil), Huang [8] (radial suppression, irregular shape), Miyazawa [17] (elliptic pupil), and Krishnamoorthi [14] (orthogonal polynomials, irregular shape). As shown in Table I, the proposed method is the most accurate. The results of the proposed method using circular pupils were even more accurate than those of the Monro and Miyazawa methods. Although a comparison on the same 200 selected images could not be performed, re-implementations of the Monro, Huang, and Miyazawa methods may yield lower performance than reported. Therefore, we also report the performance for the worst 200 and best 200 images, as shown in Table I. Even though the results for the worst 200 images are not more accurate than those of the Huang and Krishnamoorthi methods, the detection hit ratio of our method is 100% over all 2639 images.

The results of the proposed method on IITD are shown in Fig. 10 and Table II. As shown in the two bottom images of Fig. 10, even a defocused image and irregularly shaped pupils resulted in acceptable detection.

The average processing time of the proposed method, measured on a PC (Intel i7-4790 CPU @ 2.60 GHz), is shown in Table III. Because a large amount of the required information is already computed during pupil segmentation, the total processing time including iris limbus boundary segmentation, eyelid detection, and normalization may take less than 150 ms.

All CASIA-Interval v3.0 and IITD results, along with some results on CASIA-Thousand and CASIA-Lamp (examples are shown in Fig. 11), can be found at https://sites.google.com/site/khuaris/home/pupil-segmentation.

In addition, limbus boundary segmentation and normalization were performed to show that the method can be used in iris recognition systems, as shown in Figs. 12 and 13. The limbus boundary segmentation is performed in the same manner as the pupil segmentation. In the results, the limbus boundaries of the irises are well segmented even for images with unclear limbus regions (Fig. 12), and the normalized images do not contain pupil textures (Fig. 13). Therefore, the proposed method can be used in iris recognition systems with high performance.

To evaluate more general conditions, we also tested the method on the challenging conditions reported in [21], shown in Figs. 14 and 15: pupils of various sizes were correctly segmented, while non-visible pupils, such as closed eyes or only partially visible pupils, were not segmented because of the minimum voting threshold used for detecting an initial pupil center.

VI. CONCLUSION

We have proposed a novel pupil segmentation method for iris segmentation and recognition. Orientation fields were used to accurately detect the initial pupil center. With this accurate initial center, radial non-maximal edges were successfully suppressed. Pupil boundaries were accurately detected as elliptic models using a novel elliptic approximation. Therefore, the proposed method can be widely used in iris recognition systems with a high degree of accuracy.

REFERENCES

[1] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Trans. Imag. Proc., vol. 13, no. 6, pp. 739-750, 2004. doi:10.1109/TIP.2004.827237

[2] A. Jain, R. Bolle and S. Pankanti, "Biometrics: Personal Identification in a Networked Society," Springer US, New York, 2006.

[3] D. Zhang, "Automated Biometrics: Technologies and Systems," Springer US, New York, 2000.

[4] J. Daugman, "How iris recognition works," IEEE Trans. Circ. Syst. for Vid. Techn., vol. 14, no. 1, pp. 21-30, 2004. doi:10.1109/TCSVT.2003.818350

[5] Y.H. Li and P.J. Huang, "An accurate and efficient user authentication mechanism on smart glasses based on iris recognition," Mobile Inform. Syst., vol. 2017, 1281020, 2017. doi:10.1155/2017/1281020

[6] H. Proenca and L.A. Alexandre, "Introduction to the special issue on the segmentation of visible wavelength iris images captured at-a-distance and on-the-move," Image Vis. Comput., vol. 28, no. 2, pp. 213-214, 2010. doi:10.1016/j.imavis.2009.09.004

[7] N.B. Puhan, N. Sudha, and A.S. Kaushalram, "Efficient segmentation technique for noisy frontal view iris images using Fourier spectral density," Sign. Imag. Video Proc., vol. 5, no. 1, pp. 105-119, 2011. doi:10.1007/s11760-009-0146-z

[8] J. Huang, X. You, Y.Y. Tang, L. Du, and Y. Yuan, "A novel iris segmentation using radial-suppression edge detection," Sign. Proc., vol. 89, no. 12, pp. 2630-2643, 2009. doi:10.1016/j.sigpro.2009.05.001

[9] A. Radman, N. Zainal, and S.A. Suandi, "Automated segmentation of iris images acquired in an unconstrained environment using HOG-SVM and GrowCut," Digit. Sign. Proc., vol. 64, pp. 60-70, 2017. doi:10.1016/j.dsp.2017.02.003

[10] J. Daugman, "The importance of being random: statistical principles of iris recognition," Patt. Recogn., vol. 36, no. 2, pp. 279-291, 2003. doi:10.1016/S0031-3203(02)00030-4

[11] C.L. Tisse, L. Martin, L. Torres, and M. Robert, "Person identification technique using human iris recognition," in Proc. 15th Int. Conf. Vision Interface, Hong Kong, China, 2002, pp. 294-299.

[12] J. Huang, Y. Wang, T. Tan, and J. Cui, "A new iris segmentation method for recognition," in Proc. 17th Int. Conf. Pattern Recognition, Cambridge, UK, 2004, pp. 554-557. doi:10.1109/ICPR.2004.1334589

[13] D.M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Trans. Patt. Analys. Mach. Intell., vol. 29, no. 4, pp. 586-595, 2007. doi:10.1109/TPAMI.2007.1002

[14] R. Krishnamoorthi and G. Annapoorani, "A simple boundary extraction technique for irregular pupil localization with orthogonal polynomials," Comp. Vis. Imag. Underst., vol. 116, no. 2, pp. 262-273, 2012. doi:10.1016/j.cviu.2011.10.002

[15] J. Koh, V. Govindaraju, and V. Chaudhary, "A robust iris localization method using an active contour model and Hough transform," in Proc. 20th Int. Conf. Pattern Recognition, Istanbul, Turkey, 2010, pp. 2852-2856. doi:10.1109/ICPR.2010.699

[16] S. Shah, "Iris segmentation using geodesic active contours," IEEE Trans. Inform. For. Sec., vol. 4, no. 4, pp. 824-836, 2009. doi:10.1109/TIFS.2009.2033225

[17] K. Miyazawa, K. Ito, T. Aoki, K. Kobayashi, and H. Nakajima, "An effective approach for iris recognition using phase-based image matching," IEEE Trans. Patt. Analys. Mach. Intell., vol. 30, no. 10, pp. 1741-1756, 2008. doi:10.1109/TPAMI.2007.70833

[18] Z.Z. Abidin, et al., "Iris segmentation analysis using integro-differential operator and Hough transform in biometric system," J. Telec. Electr. Comput. Eng., vol. 4, no. 2, pp. 41-48, 2012.

[19] L. Hong, Y. Wan, and A.K. Jain, "Fingerprint image enhancement: algorithm and performance evaluation," IEEE Trans. Patt. Analys. Mach. Intell., vol. 20, no. 8, pp. 777-789, 1998. doi:10.1109/34.709565

[20] M. Liu, X. Jiang, and A.C. Kot, "Fingerprint reference-point detection," EURASIP J. Appli. Sign. Proc., vol. 5, pp. 498-509, 2005. doi:10.1155/ASP.2005.498

[21] M. Tonsen, X. Zhang, Y. Sugano, and A. Bulling, "Labelled pupils in the wild: a dataset for studying pupil detection in unconstrained environments," in Proc. ACM Int. Symp. Eye Tracking Research & Applications, SC, USA, 2016, pp. 139-142. doi:10.1145/2857491.2857520

SeungGwan LEE (1), Daeho LEE (1), Youngtae PARK (2)

(1) Humanitas College, Kyung Hee University, 17104, Republic of Korea

(2) Department of Electronics Engineering, Kyung Hee University, 17104, Republic of Korea nize@khu.ac.kr

Digital Object Identifier 10.4316/AECE.2019.02009
TABLE I. COMPARISON WITH OTHER METHODS ON CASIA-INTERVAL V3.0.

Method                      OC    UC    FC    Note
Monro [13]                  2.36  2.73  3.24  For 200 selected images
Miyazawa [17]               0.82  0.77  1.03  For 200 selected images
Huang [8]                   0.16  0.18  0.21  For 200 selected images
Krishnamoorthi [14]         0.15  0.17  0.21  For 100 selected images
Proposed (circle pupil)     0.13  0.21  0.32  For all 2639 images
Proposed (ellipse pupil)    0.09  0.07  0.15  For all 2639 images
Proposed (ellipse pupil)    0.23  0.20  0.40  For the worst 200 images
Proposed (ellipse pupil)    0.04  0.04  0.07  For the best 200 images

TABLE II. PUPIL SEGMENTATION PERFORMANCE ON IITD (FOR 2108 IMAGES
EXCLUDING PUPILS COVERED BY EYELIDS).

Method                    OC    UC    FC

Proposed (circle pupil)   0.06  0.34  0.38
Proposed (ellipse pupil)  0.05  0.20  0.23

TABLE III. AVERAGE PROCESSING TIME (MS) FOR THE PROPOSED METHOD

Dataset                  CASIA (320x280)  IITD (320x240)

Average processing time  77.67            66.06