Enhanced iris recognition method based on multi-unit iris images
Abstract
For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When an iris image is obtained, it is frequently rotated because the user's head rolls toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods shift the iris feature codes to perform the matching. However, this increases both the computational complexity and the false acceptance error. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns of the left and right eyes of the same person are different and are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by the score fusion of the left and right irises. Experimental results on an open iris database of low-resolution images showed that the average equal error rate of iris recognition using the proposed method was 4.3006%, which is lower than that of other methods.

1.

Introduction

As human behavioral and physiological characteristics are highly discriminative among individuals, biometric features have been employed in various applications demanding security and user convenience. In particular, biometric technology—such as iris, face, fingerprint, finger-vein, hand geometry, and palm recognition—is used mostly for person authentication or verification.1–3 Iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and sclera.3 Because its permanence and usability are very high, iris recognition systems have been considered for use in critical security areas, such as airports and border control. In general, iris recognition consists largely of iris image preprocessing, iris feature extraction, and iris feature matching.4 While iris image preprocessing and feature extraction have been the primary focus of study, iris feature matching has recently become an active issue.4 An iris image includes the valid region of the iris patterns and nonvalid regions of noisy components, such as the eyelid, eyelashes, and reflections of illumination. Iris image preprocessing algorithms, which detect these noisy factors and the boundaries of the pupil and iris in the image, have been exploited to achieve enhanced iris recognition performance.3–6 Several methods for extracting the local features of the iris have been proposed.3,5,7,8 Among the various feature extraction methods, spatial filters based on the Gabor filter have been primarily employed to extract the texture information of iris patterns.3

In the iris feature-matching procedure, it is important to calculate a correct matching score by comparing feature codes. Since occlusion caused by noise factors influences the matching score, this effect has to be considered in iris feature matching. To solve this problem, an occlusion mask has been proposed to avoid the occlusion effect caused by noises such as the eyelid and eyelashes.3,7 When the iris feature code is generated, the occlusion mask code corresponding to the iris feature code is simultaneously extracted and used to determine whether the extracted iris feature code is valid.3,7 When an iris image is being acquired, the user's head roll toward the left or right shoulder, or other movement, frequently rotates the iris image. Consequently, the rotated iris image leads to circular shifting of the iris features. If the rotation angles of the iris images differ, misalignment of the extracted feature codes occurs, which reduces iris recognition accuracy by increasing the false acceptance rate (FAR) and false rejection rate (FRR).

In previous studies, various strategies for robust iris feature matching were introduced. Ives et al. used a one-dimensional (1-D) histogram and the Du measure to obtain a globally rotation-invariant feature with low computational complexity.8 However, using the global features of an iris histogram degraded the accuracy of iris recognition more than using local texture features did. To enhance the accuracy, the local texture patterns (LTPs) method was introduced by Du et al.9 The 1-D iris signature that uses the locality extracted by LTPs is rotation invariant. While feature matching using LTPs and the Du measure does not require additional computation for matching an iris image rotated by the user's head roll, the 1-D signature of LTPs still limits the achievable recognition accuracy. To enhance the accuracy of iris recognition, Dong et al. proposed matching strategies based on the personalized weight map.4 Since the texture and patterns in some regions of the iris, without eyelid or eyelash occlusions, are more reliable than those in other regions according to the individual, the authors generated weight maps reflecting the reliability of the iris patterns using class-specific training images and online updating. However, if iris images rotated by the user's head roll are not included in the training images and online data, these images are not considered in the generation of the personalized weight map. To eliminate the effect of iris image rotation, circular bit-shifting was used in previous studies.3,7,10 As it is difficult to measure the angle of iris rotation from the image of one eye, circular bit-shifting of the iris feature codes over a fixed length is performed. In addition, an approach based on weighted majority voting and template alignment using fixed-length circular bit-shifting was introduced by Ziauddin and Dailey.11 By using fixed-length bit-shifting in the matching, iris rotation within a limited range is compensated. However, fixed-length bit-shifting increases the processing time of feature matching and the FAR of iris recognition by moving the imposter distribution toward the genuine one.

To overcome these problems, we propose a new iris feature-matching method that uses multi-unit iris images, based on the iris rotation angle. In order to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on iris shape feature and integral imaging. Both the user’s eyes are detected using RED in the approximate candidate region, consisting of the binocular region, which is determined by the Adaboost detector. Then we classify the detected eyes into the left and right eyes, because the iris patterns of the left and right eyes in the same person differ, and they are therefore considered as different classes. Thus we can improve the accuracy of iris recognition by pre-classification of the left and right eyes. By measuring the angle of head roll using the two center positions of the left and right pupils detected by two circular edge detectors (CEDs), we obtain the information about the iris rotation angle. In order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Further, the recognition accuracy is enhanced by the score fusion of left and right irises.

Table 1 summarizes comparisons of the proposed method with those of earlier studies. The main advantage of the proposed method over the existing methods is its recognition accuracy; therefore, in addition to Table 1, we include Table 2, which compares accuracy.

Table 1

Summarized comparisons of the proposed method with those of earlier studies in terms of coping with the iris image rotated by head roll.

| Category | Method | Strength | Weakness |
| --- | --- | --- | --- |
| Using iris features of rotation invariance | One-dimensional (1-D) histogram and Du measure8 | Obtains a global rotation-invariant feature with low computational complexity | The accuracy of iris recognition using the global features of an iris histogram was more degraded than that using local texture features |
| | 1-D iris signature with locality extracted by the LTPs9 | Feature matching using local texture patterns (LTPs) and the Du measure needs no additional computation for matching an iris image rotated by head roll | The 1-D signature by LTPs still limits the enhancement of recognition accuracy |
| Generating iris templates considering weights | Matching methods based on the personalized weight map4 | Since some parts of the iris texture and patterns without eyelid or eyelash occlusion are more reliable than other regions according to the individual, weight maps built from class-specific training images and online updating can show higher recognition accuracy | If iris images rotated by head roll are not included in the training images and online data, they are not considered in the generation of the personalized weight map |
| | Weighted majority voting11 | A more reliable iris template is generated by weighted majority voting, which shows better recognition accuracy | If iris images rotated by head roll are not included in the training images, they are not considered in the generation of the iris template |
| Matching by bit-shifting | Bit-shifting of fixed length3,7,10,11 | By matching with bit-shifting of a fixed length, iris rotation within a limited range is compensated | This method increases the processing time of feature matching and the false acceptance rate (FAR) of iris recognition by moving the imposter distribution toward the genuine one |
| | Adaptive bit-shifting (proposed method) | By matching with adaptive bit-shifting, whose length is determined from the measured angle of iris rotation from two eyes, iris rotation in an unlimited range is compensated, which reduces the processing time and recognition error | An image that includes two eyes is required |

Table 2

Summarized comparisons of the proposed method with those of earlier studies in terms of iris recognition accuracy.

| Category | Method | Accuracy of iris recognition |
| --- | --- | --- |
| Using iris features of rotation invariance | 1-D histogram and Du measure8 | The equal error rate (EER) of their iris recognition method (with the CASIA ver. 1 database) is 14%, which is much higher than that of our method (4.3006%) |
| | 1-D iris signature with locality extracted by the LTPs9 | They used the CASIA ver. 1 and USNA databases, whereas we used the CASIA-Iris-Distance (CASIA-IrisV4) database for our experiments. The number of images in CASIA-Iris-Distance is 10,340, which is much larger than in the CASIA ver. 1 (756) and USNA (1075) databases. The iris diameter of the CASIA-Iris-Distance database is about 170 pixels ("acceptable quality," not "good quality," based on previous research12), which is much smaller than that of the CASIA ver. 1 and USNA databases (larger than 200 pixels). The pupil areas of the CASIA ver. 1 database are manually painted black, which reduces the pupil detection error, and the number of rotated iris images is much smaller in their databases than in ours. In addition, the CASIA-Iris-Distance database was obtained at a distance (2.4 to 3 m) and includes much noise (such as low illumination, severe off-angle, hair occlusion, and specular reflection on glasses). So the accuracy of iris recognition in our research can inevitably be lower than that of their method. They measured accuracy as a cumulative match characteristic (CMC) curve (1:n matching) and obtained an accuracy of 97% at rank 5, whereas we measured accuracy as the EER and receiver operating characteristic (ROC) curve (1:1 matching, which has been widely used for the performance evaluation of biometric systems) and obtained an EER of 4.3006%. Since they used different databases, it is difficult to compare the accuracy of our method with that of their method |
| Generating iris templates considering weights | Matching methods based on the personalized weight map4 | They used the CASIA ver. 3 (CASIA-IrisV3-Lamp), UBath, and ICE2005 databases. Although their accuracy is high (an EER of 0.8% with the CASIA-IrisV3-Lamp database), all these databases were acquired at close distance, whereas the CASIA-Iris-Distance database (used in our experiments) was obtained at a distance (2.4 to 3 m) and includes much noise (such as low illumination, severe off-angle, hair occlusion, and specular reflection on glasses). The number of rotated iris images is smaller in their databases than in ours. In addition, the iris diameter of our database (CASIA-Iris-Distance) is about 170 pixels ("acceptable quality," not "good quality," based on previous research12), which is smaller than in the databases they used. So the accuracy of iris recognition in our research can inevitably be lower than that of their method. Since they used a different database, it is difficult to compare the accuracy of our method with that of their method |
| | Weighted majority voting11 | Although their accuracy is high (an EER of 0.06% with the CASIA ver. 1 database), the number of images used in our experiments is 10,340, which is much larger than the CASIA ver. 1 database (756) of their method. The pupil areas of the CASIA ver. 1 database are manually painted black, which reduces the pupil detection error. The iris diameter of the CASIA-Iris-Distance database is about 170 pixels ("acceptable quality," not "good quality," based on previous research12), which is much smaller than that of the CASIA ver. 1 database (larger than 200 pixels), and the number of rotated iris images is much smaller in their database than in ours. In addition, the CASIA-Iris-Distance database was obtained at a distance (2.4 to 3 m) and includes much noise (such as low illumination, severe off-angle, hair occlusion, and specular reflection on glasses). So the accuracy of iris recognition in our research can inevitably be lower than that of their method. Since they used a different database, it is difficult to compare the accuracy of our method with that of their method |
| Matching by bit-shifting | Bit-shifting of fixed length3,7,10,11,13 | In Ref. 13, the EER of the left or right iris with the CASIA-Iris-Distance database was higher than about 17%, which is much higher than that of our method. With the same CASIA-Iris-Distance database, the EER of the method of bit-shifting of fixed length was 5.2278% in our experiment, which is higher than that of the proposed method (4.3006%). In addition, the processing time of their method was measured as 0.865 ms (on a desktop computer with an Intel Core i7 processor at 3.47 GHz and 12 GB of RAM), which is much larger than that of our method (0.057 ms) |
| | Adaptive bit-shifting (proposed method) | The EER of iris recognition with the CASIA-Iris-Distance (CASIA-IrisV4) database is 4.3006%. The processing time of our method is 0.057 ms (on a desktop computer with an Intel Core i7 processor at 3.47 GHz and 12 GB of RAM) |

In previous studies,1,6,7,14–19 single-iris recognition, multimodal recognition, and age estimation were investigated. In Refs. 6 and 7, the authors proposed a new segmentation method for the iris region in noisy iris images captured under visible light and a new iris recognition method combining two matching scores calculated by short- and long-sized Gabor filters, respectively. In Ref. 15, the authors compared the accuracy of iris recognition with images captured by a near-infrared (NIR) illuminator of short wavelength to that with images captured by an NIR illuminator of long wavelength. In Refs. 16 and 17, the authors proposed a new eyelid-detection algorithm and a new eyelash-detection algorithm based on the measured focus score of the input image for iris recognition, respectively. In Refs. 6, 7, and 15–17, the experimental images include a single eye (not both the left and right eyes), so those studies did not use the pre-classification of the left and right irises, adaptive bit-shifting based on the measured iris rotation angle in feature matching, or score fusion of the left and right irises, as the proposed method does. In Ref. 14, the authors proposed a new iris-matching method for noisy iris images captured under visible light. They performed pre-classification of the left and right irises using the eyelash distribution and specular reflection points from a single eye (not both eyes), which showed a pre-classification error of 9.1%. However, since two eyes are visible in the experimental images of our research, accurate pre-classification of the left and right irises can be performed based on the detected positions of the left and right eyes (a pre-classification error of 0%). In addition, they did not use score fusion of the left and right irises as in the proposed method.

In Ref. 18, the authors proposed a device that captures the face and both iris images at the same time. Based on the eye position in the face image, the searching radius of the iris region in the iris image is defined. By combining the matching scores of the face and both irises, they enhanced the final recognition accuracy. However, neither Ref. 14 nor Ref. 18 used adaptive bit-shifting based on the measured iris rotation angle in feature matching, as the proposed method does. In Ref. 1, the authors proposed a device that captures fingerprint and finger-vein images at the same time; by combining the two scores of fingerprint and finger-vein recognition, they enhanced the final recognition accuracy. In Ref. 19, the authors proposed an age estimation method for face images based on a support vector machine and support vector regression. The studies in Refs. 1 and 19, unlike the proposed method, do not concern iris recognition.

This paper is organized as follows. A detailed explanation of the proposed method is given in Sec. 2. In Sec. 3, the experimental results are described, and the conclusions are presented in Sec. 4.

2.

Proposed Method of Multi-Unit Iris Recognition

2.1.

Overview of the Proposed Method

An overview of the proposed method is shown in Fig. 1. When a facial image captured by a camera with a zoom lens and a high-resolution image sensor is input, we detect both eyes using Adaboost20 and RED based on the eye shape feature and integral imaging (see "Step 2" of Fig. 1 and Sec. 2.2).21 In general, the left and right irises of the same person are considered as different classes.10 Therefore, we pre-classify both eyes into "left eye" and "right eye" based on the center positions of the detected eyes (see "Step 3" of Fig. 1 and Sec. 2.2). The pre-classified left irises are matched only with the templates of the left iris, and the right irises only with those of the right iris.14 After the boundaries of the pupil and iris are found using the two CEDs, the eyelash, eyelid, and reflection occlusions are removed to eliminate the effect of noisy factors (see "Step 4" of Fig. 1 and Sec. 2.3).6,7 In general, head roll (toward the left or right shoulder) or the user's motion rotates the iris image during acquisition. Since the rotation of the iris image generates a circular rotation of the iris features, it is important to obtain the angle of iris rotation in order to solve this problem. Therefore, we measure the head roll angle based on the pupil center coordinates of the left and right eyes determined by the two CEDs (see "Step 5" of Fig. 1 and Sec. 2.4). A 1-D Gabor filter is used to generate the iris feature code (see "Step 6" of Fig. 1 and Sec. 2.5).7,14 In addition, an occlusion mask is generated to check whether the corresponding iris region is occluded by noise factors.3,7,14 In the iris feature-matching procedure, adaptive bit-shifting based on the head roll angle is used (see "Step 7" of Fig. 1 and Sec. 2.6). The measured HDL [the Hamming distance (HD) of the "left eye"] and HDR (the HD of the "right eye") are combined by the weighted SUM rule of score-level fusion (see "Step 8" of Fig. 1 and Sec. 2.6). Finally, the weighted sum score is used to determine whether the user is genuine or an imposter by comparison with a pre-determined threshold (see "Step 9" of Fig. 1 and Sec. 2.6).

Fig. 1

Overview of the proposed method.

OE_52_4_047201_f001.png

2.2.

Detecting Both Eyes and Pre-Classification of Left and Right Eyes

To find the approximate binocular region, Adaboost with an eye-pair classifier is used.22 To reduce the processing time of detecting the binocular region in a high-resolution facial image, the original image of 4 megapixels (2352×1728 pixels) is down-sampled to 336×246 pixels, as shown in Fig. 2(a). The approximate binocular region of the down-sampled facial image is detected by the Adaboost algorithm, as shown in Fig. 2(b).

Fig. 2

An example of detecting the approximate binocular region: (a) down-sampled facial image of the original; (b) detection of the approximate binocular region by Adaboost eye detector.

OE_52_4_047201_f002.png

Using the approximate binocular region, left and right searching regions are determined based on the center position of the horizontal width of the approximate binocular region. In each searching region, the left and right eyes are detected by RED based on the eye shape feature and integral imaging.21 Commonly, the gray level of the iris and pupil regions is lower than that of their neighboring regions. In order to detect the eye region, a mask consisting of 3×3 sub-blocks of various sizes is used, as illustrated in Fig. 3(a),21 since the captured iris size differs according to individual variation as well as the Z-distance between the camera and the user's eye.

Fig. 3

An example of the detections of both eyes: (a) sub-blocks that are used for rapid eye detector (RED); (b) both eyes detected by RED.

OE_52_4_047201_f003.png

In order to enhance the processing speed, integral imaging is used to calculate the mean of each sub-block shown in Fig. 3(a).21 As shown in Fig. 3(a), R0 and R1–R8 represent the candidate iris region and its neighboring regions, respectively. The mean gray value of R0 is compared with those of R1–R8. Only if the mean value of R0 is lower than those of all the other subregions (R1–R8) is the sum of the differences between the mean value of the center subregion R0 and those of the other subregions (R1–R8) measured, reflecting the characteristics of the eye region. By moving the mask [Fig. 3(a)] of various sizes and selecting the position where the maximum sum is obtained, the eye candidate region is determined. Figure 3(b) shows examples of the detection of both eyes.
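To make this scoring scheme concrete, the following is a minimal Python sketch of RED-style sub-block scoring with an integral image. It is a sketch under our own assumptions, not the implementation of Ref. 21; the block sizes and function names are illustrative.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so any block
    sum can be read with four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return ii

def block_mean(ii, top, left, size):
    """Mean gray value of the size-by-size block with corner (top, left)."""
    s = (ii[top + size, left + size] - ii[top, left + size]
         - ii[top + size, left] + ii[top, left])
    return s / (size * size)

def red_score(ii, top, left, size):
    """Score the 3x3 sub-block mask whose center block is R0: valid only
    if R0 is darker than all eight neighbors (R1-R8), in which case the
    score is the sum of mean differences, as described in the text."""
    means = [[block_mean(ii, top + r * size, left + c * size, size)
              for c in range(3)] for r in range(3)]
    r0 = means[1][1]
    neighbors = [means[r][c] for r in range(3)
                 for c in range(3) if (r, c) != (1, 1)]
    if all(r0 < m for m in neighbors):
        return sum(m - r0 for m in neighbors)
    return -np.inf

def detect_eye(gray, block_sizes=(8, 12, 16)):
    """Slide masks of several sizes (assumed values) over the search
    region and return the center of the best-scoring mask position."""
    ii = integral_image(gray)
    best_score, best_center = -np.inf, None
    for size in block_sizes:
        for top in range(gray.shape[0] - 3 * size):
            for left in range(gray.shape[1] - 3 * size):
                s = red_score(ii, top, left, size)
                if s > best_score:
                    best_score = s
                    best_center = (left + 3 * size // 2, top + 3 * size // 2)
    return best_center  # (x, y) of the eye candidate
```

The integral image makes each block mean a constant-time operation, so the cost of the exhaustive multi-scale scan is dominated by the number of mask positions rather than by the mask size.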

Because the left and right iris patterns of the same person are different, a person's "left eye" and "right eye" are considered as different classes in iris recognition.3 To confirm this, Daugman measured the distribution of the HD between the two ("left" and "right") iris codes extracted from the same person. The experimental results showed that the mean and standard deviation of the HD distribution when matching the two irises of the same person are very similar to those of imposter matching between different persons.3 Based on these results, Shin et al. enhanced the accuracy of iris recognition in previous research by distinguishing the left and right irises.14 Based on the detected positions of the two eyes in Fig. 3(b), the left and right eyes are pre-classified.
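Given the two detected eye positions, the pre-classification itself reduces to comparing horizontal coordinates; a minimal sketch follows. Note that "left"/"right" here are image-plane labels; mapping them to the subject's anatomical left and right eyes depends on the camera setup and is an assumption of this sketch.

```python
def classify_eyes(center_a, center_b):
    """Label the two detected eye centers (x, y) by their horizontal
    order in the image. Which image side corresponds to the subject's
    left or right eye is setup-dependent (an assumption here)."""
    a, b = sorted([center_a, center_b], key=lambda p: p[0])
    return {"image_left": a, "image_right": b}
```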

2.3.

Extracting the Regions of Pupil, Iris, Eyelid, and Eyelash

Based on the detected eye regions described in Sec. 2.2, accurate pupil and iris regions are detected in the original image by using two CEDs as follows:15,18

Eq. (1)

$$\operatorname*{argmax}_{(x_0,y_0),\,r,\,(x_0',y_0'),\,r'}\left[\frac{\partial}{\partial r}\left(\int_{-\pi/4}^{\pi/6}\frac{I(x,y)}{5\pi r/12}\,ds+\int_{5\pi/6}^{5\pi/4}\frac{I(x,y)}{5\pi r/12}\,ds\right)+\max\left(\frac{\partial}{\partial r'}\int_{0}^{2\pi}\frac{I'(x',y')}{2\pi r'}\,ds\right)\right],$$

subject to

$$\begin{cases}x_0-10<x_0'<x_0+10\\ y_0-10<y_0'<y_0+10\\ r'<0.8r,\end{cases}$$

where (x0, y0) and r represent the center position and radius of the iris, respectively; (x0′, y0′) and r′ are the center position and radius of the pupil, respectively; and I(x, y) and I′(x′, y′) are the pixel gray values at the positions (x, y) and (x′, y′), respectively. While a pupil searching range of [0, 2π] radians is used to find the pupil boundary, iris searching ranges of [−π/4, π/6] and [5π/6, 5π/4] radians are used to determine the iris boundary, as the iris boundary is frequently occluded by the upper and lower eyelids.6,15,18 Figure 4(b) shows an example of the localization of the pupil and iris.

Fig. 4

Iris localization and detection of noise factors: (a) original iris image; (b) detection of inner and outer boundaries of iris; (c) the preprocessed image after removing noises such as eyelashes and eyelid.

OE_52_4_047201_f004.png

The noise generated by the eyelashes and eyelid occludes the iris patterns in the localized iris region. Because this irregular noise generates incorrect codes of the iris patterns, the accuracy of iris recognition is degraded. To obtain the candidate points of the upper and lower eyelids, upper and lower eyelid-detecting masks are employed.6,16,18 The parabolic Hough transform then detects the upper and lower eyelid lines using the upper and lower candidate points. After removing the eyelid, the eyelashes are detected using their gray-level and directional characteristics: the gray level of an eyelash is lower than that of its neighboring pixels, and the vertical component of an eyelash is stronger than the horizontal component. An eyelash-detecting mask that reflects these characteristics is used to eliminate eyelashes, as sketched below.6,17,18 The result of eliminating the eyelashes and eyelid is shown in Fig. 4(c).
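The eyelash criterion can be illustrated with a rough directional mask. This is only one plausible reading of such a mask, with illustrative neighbor offsets and thresholds; the actual masks of Refs. 6, 17, and 18 are not reproduced here.

```python
import numpy as np

def eyelash_mask(gray, dark_margin=20.0):
    """Flag pixels that are darker than their horizontal neighbors
    (thin, roughly vertical dark structures) and continuous in the
    vertical direction. dark_margin and the offsets are illustrative
    assumptions, not the paper's values."""
    g = gray.astype(np.float64)
    # Darker than the average of pixels two columns to the left/right.
    horiz_nb = (np.roll(g, 2, axis=1) + np.roll(g, -2, axis=1)) / 2.0
    darker = g < horiz_nb - dark_margin
    # Similar gray value to the pixels directly above/below
    # (vertical continuity of the eyelash).
    vert_nb = (np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0)) / 2.0
    continuous = np.abs(g - vert_nb) < dark_margin
    return darker & continuous
```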

2.4.

Measuring the Angle of Head Roll

When head roll toward the left or right shoulder occurs during acquisition of a facial image, as shown in Fig. 5, a difference arises between the y coordinates of the center points of the left and right eyes in the captured facial image; as the head roll increases, this gap also increases. Based on this, we measure the head roll angle using the x and y distances between the center coordinates of both irises. The center coordinates of the left and right pupils extracted by the two CEDs in the original facial image are used to calculate the x and y distances. In Fig. 5, θr, (xL, yL), and (xR, yR) represent the head roll angle, the center coordinate of the left pupil, and that of the right pupil, respectively. The head roll angle (θr) is measured as

Eq. (2)

$$\theta_r=\tan^{-1}\left(\frac{y_L-y_R}{x_R-x_L}\right).$$
If yL of the left pupil is larger than yR of the right pupil, θr becomes a positive roll angle. Conversely, if yL of the left pupil is smaller than yR of the right pupil, as shown in Fig. 5, θr becomes a negative roll angle.
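In code, Eq. (2) is a single arctangent; a minimal sketch:

```python
import math

def head_roll_angle(left_pupil, right_pupil):
    """Eq. (2): head roll angle (deg) from the pupil center coordinates
    (xL, yL) and (xR, yR) found by the two CEDs. The sign convention
    follows the text: positive when yL > yR."""
    (xl, yl), (xr, yr) = left_pupil, right_pupil
    return math.degrees(math.atan2(yl - yr, xr - xl))
```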

Fig. 5

An example of measuring the head roll angle using the center coordinates of left and right pupils.

OE_52_4_047201_f005.png

2.5.

Generating Iris Code Using 1-D Gabor Filter

The iris diameter of the captured iris image varies according to personal variance and the Z-distance between the user and the camera sensor. In addition, since the pupil contracts and dilates in response to illumination variation, the distance between the inner and outer boundaries of the captured iris is inconsistent. To decrease these variances, the following normalization procedures are performed.7,14,18 A rectangular image in polar coordinates is generated from the preprocessed iris image of Fig. 4(c). The image is then split into eight tracks and 256 sectors, as shown in Fig. 6. A further normalized image of 256×8 pixels is then obtained from the image in Fig. 6 by using a 1-D Gaussian kernel to calculate the weighted mean of several gray pixels in each track in the vertical direction (ρ axis).7,14,18
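A minimal sketch of this polar normalization follows. The number of radial samples per track and the Gaussian width are assumptions, since the paper does not list them.

```python
import numpy as np

def normalize_iris(gray, pupil, iris, n_sectors=256, n_tracks=8, n_samples=4):
    """Unwrap the annulus between the pupil circle and the iris circle
    into an n_tracks x n_sectors rectangle (rho x theta). Each track
    value is a Gaussian-weighted mean of n_samples radial samples, as
    described above; n_samples and sigma are assumptions."""
    (xp, yp, rp), (xi, yi, ri) = pupil, iris       # circles as (x, y, r)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_sectors, endpoint=False)
    offs = np.arange(n_samples) - (n_samples - 1) / 2.0
    weights = np.exp(-0.5 * (offs / (n_samples / 4.0)) ** 2)
    weights /= weights.sum()                       # 1-D Gaussian kernel
    out = np.zeros((n_tracks, n_sectors))
    for t in range(n_tracks):
        for k in range(n_samples):
            rho = (t + (k + 0.5) / n_samples) / n_tracks
            # Interpolate between the (possibly non-concentric) circles.
            xs = (1 - rho) * (xp + rp * np.cos(thetas)) + rho * (xi + ri * np.cos(thetas))
            ys = (1 - rho) * (yp + rp * np.sin(thetas)) + rho * (yi + ri * np.sin(thetas))
            vals = gray[np.clip(ys.astype(int), 0, gray.shape[0] - 1),
                        np.clip(xs.astype(int), 0, gray.shape[1] - 1)]
            out[t] += weights[k] * vals
    return out  # shape (8, 256), matching the 256 x 8 normalized image
```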

Fig. 6

Normalized image from the preprocessed iris image of Fig. 4(c).

OE_52_4_047201_f006.png

From the normalized image of 256×8 pixels, we extract an iris binary code of 2048 bits by using a 1-D Gabor filter with a kernel size of 25 and a central frequency of 1/20.7,14,18 To guarantee the reliability of the iris feature code, a mask code of 2048 binary bits is also extracted, which represents whether the corresponding iris code was extracted from an occluded area (the white region inside the iris area in Fig. 6).7,14,18 Only the iris code whose mask code is valid (extracted from an area not occluded by noise) is used for calculating the HD score.7,14,18 As both the left and right irises in the captured facial image are used for iris recognition, the iris codes and mask codes of both irises are generated.
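The encoding step can be sketched as follows. The Gaussian bandwidth of the Gabor kernel and the use of the sign of the real part only (which yields exactly one bit per sample, matching the stated 2048 bits for a 256×8 image) are our assumptions; Daugman-style coders often take two quadrature bits per sample instead.

```python
import numpy as np

def gabor_code(norm_iris, kernel_size=25, freq=1.0 / 20.0):
    """Encode the normalized 8 x 256 iris image with a 1-D complex Gabor
    filter (25-tap kernel, 1/20 central frequency, as stated above),
    applied circularly along each track. Sign of the real part gives one
    bit per sample: 8 x 256 = 2048 bits. sigma is an assumption."""
    t = np.arange(kernel_size) - kernel_size // 2
    sigma = kernel_size / 6.0  # assumed bandwidth
    kernel = np.exp(-t ** 2 / (2.0 * sigma ** 2)) * np.exp(2j * np.pi * freq * t)
    codes = []
    for track in norm_iris:  # each track: 256 angular samples
        # Circular convolution keeps the angular axis periodic.
        resp = np.fft.ifft(np.fft.fft(track) * np.fft.fft(kernel, track.size))
        codes.append(np.real(resp) >= 0.0)  # binary bits for this track
    return np.stack(codes)  # boolean array of shape (8, 256)
```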

2.6.

Iris Code Matching by Adaptive Bit-Shifting and Score Fusions of Two HD of Left and Right Irises

Since the rotation of the iris image, as shown in Fig. 5, leads to circular shifting of the iris features, misalignment of the iris patterns between the enrolled and recognized iris images occurs in the iris-matching procedure. To reduce the misalignment of iris patterns caused by head roll, circular bit-shifting of the iris feature codes was used in previous studies.3,7,10,11 However, as it is difficult to obtain an accurate head roll angle from the image of one eye, circular bit-shifting covering a wide range of rotation angles is performed, thereby increasing both the processing time of matching and the matching error rate.

To overcome these problems, adaptive bit-shifting based on the measured iris rotation angle is utilized in this study.

Using the enrolled and input iris images, the two head roll angles in Fig. 5 are calculated as θrE and θrR, respectively. From that, the angle difference θrd is calculated as

Eq. (3)

$$\theta_{rd}=\theta_{rE}-\theta_{rR}.$$
If θrd is positive, the enrolled iris image is rotated counterclockwise by θrd relative to the recognized iris image; if θrd is negative, the enrolled iris image is rotated clockwise by |θrd| relative to the recognized iris image.

When measuring the head roll angle using the pupil center positions of the left and right eyes, as shown in Fig. 5, a detection error of the iris and pupil regions by the two CEDs can inevitably occur. Therefore, we consider this error when determining the range of adaptive bit-shifting as

Eq. (4)

$$f_s=\frac{\theta_{rd}-\theta_{re}}{\theta_s},\qquad f_e=\frac{\theta_{rd}+\theta_{re}}{\theta_s},$$

Eq. (5)

$$F_s=\begin{cases}-a,&\text{if }|\theta_{rd}|<\theta_{rT}\\ \lfloor f_s\rfloor,&\text{else,}\end{cases}$$

Eq. (6)

$$F_e=\begin{cases}a,&\text{if }|\theta_{rd}|<\theta_{rT}\\ \lceil f_e\rceil,&\text{else,}\end{cases}$$

Eq. (7)

$$F_s\le x\le F_e,$$

where fs and fe denote the starting and ending values of the bit-shifting range, respectively. The optimal margin θre, which accounts for the detection error of the iris and pupil regions by the two CEDs, was experimentally determined using training data. θs denotes the angle between adjacent sectors in Fig. 6; since the 256 sectors correspond to 360 deg, θs is 1.40625 deg (360 deg/256).

As shown in Eq. (4), if the measured θrd is much smaller than the margin θre, then θre dominates the calculation of fs and fe, and the credibility of the calculated fs and fe is inevitably degraded. To solve this problem, we use an additional threshold, θrT, as shown in Eqs. (5) and (6); the optimal θrT was also experimentally determined using training data. Therefore, only if the absolute value of the measured θrd is greater than or equal to the threshold θrT are ⌊fs⌋ and ⌈fe⌉ used as the starting and ending values of the bit-shifting range; otherwise, the pre-determined values −a and a are used, as shown in Eqs. (5) and (6). The optimal value of a was experimentally determined using training data.

Since fs and fe are real numbers, while the starting and ending values of the bit-shifting range must be integers, the floor and ceiling functions are used in Eqs. (5) and (6); ⌊fs⌋ is the greatest integer not greater than fs, and ⌈fe⌉ is the smallest integer not smaller than fe.23 For example, assuming that the measured θrd is 5.3 deg and θre is 0.2 deg, fs and fe of Eq. (4) are about 3.627 [(5.3−0.2)/1.40625] and 3.911 [(5.3+0.2)/1.40625], respectively, since θs is 1.40625 deg. Assuming that θrT is 2 deg, |θrd| exceeds θrT, so ⌊fs⌋ and ⌈fe⌉ of Eqs. (5) and (6) are 3 and 4, respectively, and these become the final starting and ending values [Fs and Fe of Eq. (7)]. Thus the iris code matching with bit-shifting is performed over the range of 3 to 4 bits.
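The worked example can be checked with a short sketch of Eqs. (4) to (7). The default θre below is the measured margin reported in Sec. 3 (0.1322 deg); θrT and a are placeholders, since the paper determines them from training data without listing their values.

```python
import math

def shift_range(theta_rd, theta_re=0.1322, theta_rT=2.0, a=2,
                theta_s=360.0 / 256.0):
    """Eqs. (4)-(7): bit-shift range [Fs, Fe] from the measured angle
    difference theta_rd (deg). theta_rT and a are placeholders, not the
    paper's trained values."""
    if abs(theta_rd) < theta_rT:
        return -a, a                      # fall back to a fixed small range
    fs = (theta_rd - theta_re) / theta_s  # Eq. (4)
    fe = (theta_rd + theta_re) / theta_s
    return math.floor(fs), math.ceil(fe)  # Eqs. (5) and (6)

# Worked example from the text: theta_rd = 5.3 deg, theta_re = 0.2 deg.
print(shift_range(5.3, theta_re=0.2))     # -> (3, 4)
```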

Based on the adaptive bit-shifting range of Eq. (7), the two HD scores (HDL and HDR) of the "left eye" and "right eye" are calculated against the left and right iris templates, respectively.
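A minimal sketch of masked HD matching over the adaptive shift range follows, assuming codes and masks are stored as 8×256 Boolean arrays (tracks × sectors), so that one sector shift corresponds to one circular bit-shift per track; this layout is our assumption.

```python
import numpy as np

def masked_hd(code_a, mask_a, code_b, mask_b, shift):
    """HD between two iris codes stored as (tracks, sectors) boolean
    arrays, with code_b circularly shifted by `shift` sectors along the
    angular axis. Only bit positions that both occlusion masks mark as
    valid contribute to the distance."""
    cb = np.roll(code_b, shift, axis=1)
    mb = np.roll(mask_b, shift, axis=1)
    valid = mask_a & mb
    if not valid.any():
        return 1.0  # no comparable bits: treat as maximal distance
    return np.count_nonzero(code_a[valid] != cb[valid]) / valid.sum()

def adaptive_match(code_a, mask_a, code_b, mask_b, fs, fe):
    """Minimum HD over the adaptive shift range [Fs, Fe] of Eq. (7)."""
    return min(masked_hd(code_a, mask_a, code_b, mask_b, s)
               for s in range(fs, fe + 1))

# e.g., HDL = adaptive_match(left_code, left_mask,
#                            left_template_code, left_template_mask, 3, 4)
```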

To improve the accuracy of iris recognition, the two HD scores of HDL and HDR are combined by the weighted SUM rule of score level fusion as follows:

Eq. (8)

$$HD_F=W\times HD_L+(1-W)\times HD_R.$$
HDF represents the final HD score of HDL and HDR combined by the weighted SUM rule. The optimal weight W was experimentally determined using training images. Finally, HDF is used to discriminate whether the user is genuine or an imposter by comparison with the pre-determined threshold.
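In code, Eq. (8) is a single line; W = 0.5 below is a placeholder, since the paper's trained weight is not listed.

```python
def fuse(hd_left, hd_right, w=0.5):
    """Eq. (8): weighted SUM fusion of the two HD scores; w = 0.5 is a
    placeholder for the training-determined weight."""
    return w * hd_left + (1.0 - w) * hd_right
```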

3.

Experimental Results

Although there are various iris databases, such as CASIA,24 UPOL,25 ICE,26 UBIRIS,27 IITD,28 MMU,29 and University of Bath,30 there is no open database that includes both irises except the CASIA-Iris-Distance database.24 To evaluate the performance of the proposed method, we used the CASIA-Iris-Distance (CASIA-IrisV4) database, which consists of 2,567 images obtained from 284 classes of 142 volunteers.24 CASIA-Iris-Distance includes iris images captured by a self-developed long-range multi-modal biometric image acquisition and recognition system (LMBS).24 Detailed specifications of the physical system are not disclosed,24 nor are the magnification factor and focal length of the camera lens.

Each image of CASIA-Iris-Distance, captured at a long Z-distance, includes both eyes in a facial image of 2352×1728 pixels. Since the entire image that includes both eyes is 2352×1728 pixels, the pixel diameter of the iris area is less than about 170 pixels, which can be regarded as "acceptable quality" rather than "good quality." Based on previous research, an iris image having a diameter over 200 pixels is considered "good quality," while iris diameters of 150 to 200 and 100 to 150 pixels are regarded as "acceptable quality" and "marginal quality," respectively.12 Because the CASIA-Iris-Distance database is the only open database that includes both irises, we used it for our experiments, although the resolution of the iris region is lower than in other conventional iris databases. Because of the varied capturing environments over the long Z-distance [2.4 to 3 m (Ref. 24)], diverse noise factors, such as low illumination, severe off-angle, hair occlusion, and specular reflection on glasses, appear in both eyes in the CASIA-Iris-Distance database, which affects the detection performance of the iris region. Thus 2068 images were used for our experiments, excluding images that were too noisy and caused large detection errors of the iris regions, since enhancing the performance of the detection algorithm is not the goal of our research. As the 2068 images of the CASIA-Iris-Distance database are too few to accurately evaluate the performance of the proposed method, we expanded the database by artificially rotating each original image by −5, 5, −15, and 15 deg. From that, we obtained a larger database of 10,340 images (including the original and rotated images), which is much larger than the original CASIA-Iris-Distance database.

With the expanded database of 10,340 images, performance was measured based on fourfold cross-validation. Cross-validation has been widely used in pattern classification,19,31 and it separates the training and testing data in order to guarantee the confidence level of the experiment. In fourfold cross-validation, the whole database is divided equally into four parts: A, B, C, and D. In the first trial, three-fourths of the images (A, B, C) are used for training ("training data set 1"), and the remaining one-fourth (D) is used for testing ("testing data set 1"). In the second trial, another three-fourths (A, C, D) are used for training ("training data set 2"), and the remaining one-fourth (B) for testing ("testing data set 2"). Following this procedure, the average accuracy over the four testing data sets from the four trials is measured, as shown in Table 3 and Fig. 7.
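A minimal sketch of this fourfold protocol is given below; the random split is an assumption, since the paper does not state how the four parts were formed.

```python
import numpy as np

def four_fold_splits(n_images, seed=0):
    """Divide image indices into four equal parts A-D; each trial trains
    on three parts and tests on the held-out part, as described above.
    The random shuffle (and seed) is an assumption."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_images), 4)
    for k in range(4):
        train = np.concatenate([folds[j] for j in range(4) if j != k])
        yield train, folds[k]

# eers = [evaluate_eer(train, test) for train, test in four_fold_splits(10340)]
# (evaluate_eer is a hypothetical evaluation routine; the average of the
# four values corresponds to the averaged EERs reported in Table 3.)
```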

Table 3

Comparisons of average EERs (%) of the proposed and other methods with four testing data sets.

| Method | Without pre-classification (left and right eyes) | With pre-classification (left and right eyes) | With pre-classification (left and right eyes) + weighted SUM rule |
| --- | --- | --- | --- |
| Bit-shifting of fixed length3 | 34.4381 | 9.8040 | 5.2278 |
| Adaptive bit-shifting (proposed method) | 32.4473 | 8.1117 | 4.3006 |

Fig. 7

ROC curves of the proposed method (pre-classification+adaptive bit-shifting+weighted SUM rule) and other methods.

OE_52_4_047201_f007.png

Since the measurement error of the head roll angle affects the adaptive bit-shifting in iris code matching, in the first experiment this error was measured by comparing the angles calculated by the proposed method against manually obtained measurements. The experimental results with the four training data sets ("training data set 1" through "training data set 4") showed an average measurement error of about 0.1322 deg, which was used as θre of Eq. (4).

As the second experiment, we tested the performance of the pre-classified "left eye" and "right eye" against that of "both eyes" without pre-classification, as summarized in Table 3. In the pre-classification test, an input iris code determined as "left eye" was matched only with the enrolled iris codes of the "left eye," and one determined as "right eye" only with the enrolled iris codes of the "right eye," as shown in Fig. 1. In the case of "both eyes" without pre-classification, the input iris code was matched with the enrolled iris codes of both irises.

As the third test, we compared the performance of bit-shifting of a fixed length, as proposed by Daugman,3 with the adaptive bit-shifting used in the proposed method. To measure the accuracy of Daugman's fixed-length bit-shifting, we obtained the four optimal bit-shifting lengths for "left iris," "right iris," and "both irises" matching from each training data set (training data sets 1 to 4) in terms of recognition accuracy. Based on these four optimal lengths, the four equal error rates (EERs) of Daugman's method were measured with the four testing data sets (testing data sets 1 to 4), and the average EER is shown in Table 3 and Fig. 7.

Table 3 lists the EERs of iris recognition with and without pre-classification for Daugman's bit-shifting of a fixed length3 and the adaptive bit-shifting of the proposed method. The EER is the error rate at which the FAR equals the FRR; it has been widely used as the performance criterion of biometric systems.32 The FAR is the rate of accepting an un-enrolled person as enrolled, while the FRR is the rate of rejecting an enrolled person as un-enrolled. The EERs without/with pre-classification were obtained by averaging the EERs of the left and right irises over the four testing data sets.
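For reference, the EER can be computed from sets of genuine and imposter scores as sketched below; this is a generic sketch, not the authors' evaluation code.

```python
import numpy as np

def eer(genuine, imposter):
    """Equal error rate from arrays of genuine and imposter HD scores
    (a lower HD means a better match). Sweeps candidate thresholds and
    returns the error where FAR and FRR are closest."""
    thresholds = np.sort(np.unique(np.concatenate([genuine, imposter])))
    best_far, best_frr = 1.0, 0.0
    for t in thresholds:
        frr = float(np.mean(genuine > t))    # genuine pairs rejected
        far = float(np.mean(imposter <= t))  # imposter pairs accepted
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2.0
```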

Table 3 shows that iris code matching with pre-classification improves the accuracy of iris recognition compared with matching without pre-classification. In addition, the EER of adaptive bit-shifting was lower than that of fixed-length bit-shifting. In conclusion, the EER with pre-classification and adaptive bit-shifting was 8.1117%, much smaller than the 34.4381% obtained without pre-classification and with bit-shifting of a fixed length.

In the fourth experiment, we compared the performance of bit-shifting of a fixed length and adaptive bit-shifting using both the pre-classification (of the left and right irises) and the weighted SUM rule [combining the two matching scores of the left and right irises by Eq. (8)]. As shown in Table 3, the EER of adaptive bit-shifting (4.3006%) was lower than that of fixed-length bit-shifting (5.2278%); thus the EER using all of the proposed pre-classification, adaptive bit-shifting, and weighted SUM rule was 4.3006%, lower than in all other cases in Table 3.

Figure 7 shows the receiver operating characteristic (ROC) curves of the proposed method (pre-classification + adaptive bit-shifting + weighted SUM rule) and the other methods. The ROC curve plots the genuine acceptance rate (GAR) against the FAR, where GAR is 100 − FRR (%). In Fig. 7, "pre-classification + adaptive bit-shifting" and "pre-classification + bit-shifting of fixed length" denote the ROC curves of uni-modal iris recognition, which does not combine the two scores of the left and right irises; in these cases, the ROC curves were obtained by averaging the accuracies of the left and right irises. From Fig. 7, we confirm that the accuracy of the proposed method (pre-classification + adaptive bit-shifting + weighted SUM rule) is higher than that of the other methods.

Since the pixel diameter of the iris area in the CASIA-Iris-Distance database is mostly less than 170 pixels, which can be regarded as "acceptable quality" rather than "good quality" based on previous research,12 and the images were captured in various environments over a long Z-distance (2.4 to 3 m away24) with various noises, the final EER (4.3006%) of the proposed iris recognition is relatively higher than that of conventional iris recognition systems.

If the image resolution of both irises decreases with increasing Z-distance, decreasing sensor resolution, or decreasing lens magnification, the accuracy of our method is inevitably degraded by the increased segmentation error of the iris, pupil, eyelid, and eyelash areas and the reduced amount of iris pattern in the iris area; the same is true of conventional iris recognition systems. However, the proposed adaptive bit-shifting is not greatly affected by such lower-resolution images, because the proposed method uses an additional margin for bit-shifting that accounts for the error in estimating the rotation angle of both irises.

In the next experiment, we compared the average processing time of calculating the HD of one iris (left or right) per template using adaptive bit-shifting and Daugman's method, as shown in Table 4. The experiments were performed on a desktop computer with an Intel Core i7 processor (3.47 GHz) and 12 GB of RAM. Since the number of bit-shifts is smaller for adaptive bit-shifting, its processing speed is much faster than that of fixed-length bit-shifting. Although the difference in processing time is only 0.808 ms (0.865 − 0.057), this is for matching with only one iris template. In various applications, iris recognition is used as an identification system (1:N matching),33 and as the number of iris templates in an identification system increases, the difference in processing time inevitably becomes larger. For example, if the number of iris templates is 100,000, the difference in processing time is 80.8 s (0.808 ms × 100,000). In addition, if the comparison is executed on a desktop computer or mobile device with a slower CPU, the difference becomes much larger.

Table 4

Comparisons of processing times (ms).

| Method | With pre-classification (left and right eyes) + weighted SUM rule |
| --- | --- |
| Bit-shifting of fixed length3 | 0.865 |
| Adaptive bit-shifting (proposed method) | 0.057 |

In the last experiment, we measured the processing time of the sub-algorithms of the proposed method, as shown in Table 5. The processing time for detecting both eyes using Adaboost and RED was 27.3 ms. The processing time of extracting the regions of pupil, iris, eyelid, and eyelash for both irises was 312.3 ms, that for generating the iris code using a 1-D Gabor filter on both irises was 22 ms, and that for iris code matching by adaptive bit-shifting on both irises was 0.114 ms. Consequently, the total processing time using the proposed method was 361.714 ms.

Table 5

Processing time of each module of the proposed method (ms).

| Each module | Processing time |
| --- | --- |
| Detecting both eyes | 27.3 |
| Pre-classification of left and right eyes | 0 |
| Extracting the regions of pupil, iris, eyelid, and eyelash on both irises | 312.3 |
| Measuring the angle of head roll | 0 |
| Generating iris codes using 1-D Gabor filter on both irises | 22 |
| Iris code matching by adaptive bit-shifting on both irises | 0.114 |
| Score level fusion by weighted SUM rule | 0 |
| Total | 361.714 |

4.

Conclusions

We propose a novel iris recognition method based on multi-unit iris images. In order to detect both eyes, we use Adaboost and a RED based on the iris shape feature and integral imaging. We then improve the accuracy of iris recognition by pre-classifying the left and right eyes. By measuring the angle of head roll (toward the left or right shoulder) using the two center positions of the left and right pupils detected by two CEDs, we obtain the iris rotation angle. In order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Finally, the recognition accuracy is enhanced by score fusion of the left and right irises. According to the experimental results, the proposed method enhances the performance of iris recognition in comparison with existing methods. In future work, we intend to investigate adaptive bit-shifting based on the iris rotation angle estimated from an image that includes only one eye.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2012R1A1A2038666), and in part by the Public Welfare & Safety Research program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2012-0006554). Portions of the research in this paper use the CASIA-IrisV4 collected by the Chinese Academy of Sciences’ Institute of Automation (CASIA).

References

1. 

D. T. Nguyen et al., "Combining touched fingerprint and finger-vein of a finger, and its usability evaluation," Adv. Sci. Lett., 5(1), 85–95 (2012). http://dx.doi.org/10.1166/asl.2012.2177

2. 

M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces," in Proc. IEEE Comput. Soc. Conf. on Comput. Vis. and Pattern Recognit., 586–591 (1991).

3. 

J. Daugman, "How iris recognition works," IEEE Trans. Circ. Syst. Video Technol., 14(1), 21–30 (2004). http://dx.doi.org/10.1109/TCSVT.2003.818350

4. 

W. Dong, Z. Sun, and T. N. Tan, "Iris matching based on personalized weight map," IEEE Trans. Pattern Anal. Mach. Intell., 33(9), 1744–1757 (2011). http://dx.doi.org/10.1109/TPAMI.2010.227

5. 

L. Ma et al., "Personal identification based on iris texture analysis," IEEE Trans. Pattern Anal. Mach. Intell., 25(12), 1519–1533 (2003). http://dx.doi.org/10.1109/TPAMI.2003.1251145

6. 

D. S. Jeong et al., "A new iris segmentation method for non-ideal iris images," Image Vis. Comput., 28(2), 254–260 (2010). http://dx.doi.org/10.1016/j.imavis.2009.04.001

7. 

H.-A. Park and K. R. Park, "Iris recognition based on score level fusion by using SVM," Pattern Recognit. Lett., 28(15), 2019–2028 (2007). http://dx.doi.org/10.1016/j.patrec.2007.05.017

8. 

R. W. Ives, A. J. Guidry, and D. M. Etter, "Iris recognition using histogram analysis," in Proc. IEEE Conf. Record of the 38th Asilomar Conf. on Signals, Systems and Computers, 562–566 (2004).

9. 

Y. Du et al., "Use of one-dimensional iris signatures to rank iris pattern similarities," Opt. Eng., 45(3), 037201 (2006). http://dx.doi.org/10.1117/1.2181140

10. 

J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., 15(11), 1148–1161 (1993). http://dx.doi.org/10.1109/34.244676

11. 

S. Ziauddin and M. N. Dailey, "Iris recognition performance enhancement using weighted majority voting," in Proc. 15th IEEE Int. Conf. on Image Process., 277–280 (2008).

12. 

J. R. Matey et al., "Iris on the move: acquisition of images for iris recognition in less constrained environments," Proc. IEEE, 94(11), 1936–1947 (2006). http://dx.doi.org/10.1109/JPROC.2006.884091

13. 

S. Arora, N. D. Londhe, and A. K. Acharya, "Human identification based on iris recognition for distant images," Int. J. Comput. Appl., 45(16), 32–39 (2012). http://dx.doi.org/10.5120/6866-9470

14. 

K. Y. Shin et al., "New iris recognition method for noisy iris images," Pattern Recognit. Lett., 33(8), 991–999 (2012). http://dx.doi.org/10.1016/j.patrec.2011.08.016

15. 

S. R. Cho et al., "Mobile iris recognition system based on the near infrared light illuminator of long wavelength and band pass filter and performance evaluations," J. Korea Multimed. Soc., 14(9), 1125–1137 (2011).

16. 

Y. K. Jang, B. J. Kang, and K. R. Park, "A study on eyelid localization considering image focus for iris recognition," Pattern Recognit. Lett., 29(11), 1698–1704 (2008). http://dx.doi.org/10.1016/j.patrec.2008.05.001

17. 

B. J. Kang and K. R. Park, "A robust eyelash detection based on iris focus assessment," Pattern Recognit. Lett., 28(13), 1630–1639 (2007). http://dx.doi.org/10.1016/j.patrec.2007.04.004

18. 

Y. G. Kim et al., "Multimodal biometric system based on the recognition of face and both irises," Int. J. Adv. Robot. Syst., 9(65), 1–6 (2012).

19. 

S. E. Choi et al., "Age estimation using a hierarchical classifier based on global and local facial features," Pattern Recognit., 44(6), 1262–1281 (2011). http://dx.doi.org/10.1016/j.patcog.2010.12.005

20. 

P. I. Wilson and J. Fernandez, "Facial feature detection using Haar classifiers," J. Comput. Sci. Colleges, 21(4), 127–133 (2006).

21. 

B. Kim, H. Lee, and W.-Y. Kim, "Rapid eye detection method for non-glasses type 3D display on portable devices," IEEE Trans. Cons. Electron., 56(4), 2498–2505 (2010). http://dx.doi.org/10.1109/TCE.2010.5681133

22. 

M. Castrillón et al., "A comparison of face and facial feature detectors based on the Viola–Jones general object detection framework," Mach. Vis. Appl., 22(3), 481–494 (2011).

23. 

"Floor and ceiling functions," http://en.wikipedia.org/wiki/Floor_and_ceiling_functions (accessed February 2013).

24. 

"CASIA Iris Image Database," http://biometrics.idealtest.org/ (accessed March 2013).

25. 

"Iris Database," http://phoenix.inf.upol.cz/iris/ (accessed March 2013).

26. 

"ICE—Iris Challenge Evaluation," http://iris.nist.gov/ICE/ (accessed February 2013).

27. 

H. Proença and L. A. Alexandre, "UBIRIS: a noisy iris image database," in Proc. 13th Int. Conf. on Image Anal. and Process., 970–977 (2005).

28. 

"IIT Delhi Iris Database (Version 1.0)," http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Iris.htm (accessed March 2013).

29. 

"MMU Iris Database," http://pesona.mmu.edu.my/~ccteo/ (accessed March 2013).

30. 

"Bath Iris Image Database," http://www.smartsensors.co.uk/information/bath-iris-image-database/ (accessed March 2013).

31. 

J. Suo et al., "Design sparse features for age estimation using hierarchical face model," in Proc. 8th IEEE Int. Conf. on Automatic Face & Gesture Recognit., 1–6 (2008).

32. 

Z. Zhu and T. S. Huang, Multimodal Surveillance: Sensors, Algorithms, and Systems, Artech House, Boston (2007).

33. 

F. Hao, J. Daugman, and P. Zieliński, "A fast search algorithm for a large fuzzy database," IEEE Trans. Inform. Forensic. Security, 3(2), 203–212 (2008). http://dx.doi.org/10.1109/TIFS.2008.920726

Biography

OE_52_4_047201_d001.png

Kwang Yong Shin received his BS in electronics engineering from Dongguk University, Seoul, South Korea, in 2008. He is currently pursuing a combined course of MS and PhD degrees in electronics and electrical engineering at Dongguk University. His research interests include biometrics and pattern recognition.

OE_52_4_047201_d002.png

Yeong Gon Kim received his BS in computer engineering from Dongguk University, Seoul, South Korea, in 2011. He is currently pursuing a PhD degree in electronics and electrical engineering at Dongguk University. His research interests include biometrics and pattern recognition.

OE_52_4_047201_d003.png

Kang Ryoung Park received his BS and MS in electronic engineering from Yonsei University, Seoul, Korea, in 1994 and 1996, respectively. He also received his PhD in the electrical and computer engineering, Yonsei University, in 2000. He was an assistant professor in the Division of Digital Media Technology at Sangmyung University until February 2008. He is currently an associate professor in the Division of Electronics and Electrical Engineering at Dongguk University. He is also a research member of BERC. His research interests include computer vision, image processing, and biometrics.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Kwang Yong Shin, Yeong Gon Kim, and Kang Ryoung Park "Enhanced iris recognition method based on multi-unit iris images," Optical Engineering 52(4), 047201 (3 April 2013). https://doi.org/10.1117/1.OE.52.4.047201
Published: 3 April 2013
KEYWORDS
Iris recognition

Databases

Eye

Head

Sensors

Feature extraction

Image enhancement
