Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes

Ki Wan Kim, Won Oh Lee, Yeong Gon Kim, Hyung Gil Hong, Eui Chul Lee, Kang Ryoung Park

3 March 2015
Abstract
The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. Classification with high accuracy requires accurate segmentation of the eye region. Most previous research segmented the eye by image binarization, on the basis that the eyeball is darker than skin, but the performance of this approach is frequently degraded by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using a fuzzy logic system based on the I and K inputs, which is less affected by eyelashes and shadows around the eye; the combined image of the I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect of all inference values on the output score of the fuzzy system, we use a revised weighted average method, in which the rectangular regions defined by all inference values are considered when calculating the output score. Fourth, the classification of eye openness or closure is successfully performed on low-resolution eye images captured in an environment where people watch TV at a distance. Because it uses a fuzzy logic system, our method requires no additional training procedure, irrespective of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.

1.

Introduction

Eye blinks occur in the majority of primates, including humans. They occur spontaneously, for example when sneezing, and can also be used consciously, such as to attract a partner. Eye blink frequency has been studied as a factor in fatigue and drowsiness detection systems to alert drivers.1–4 In other research, eye fatigue when viewing a display was measured under the assumption that people blink more than usual when fatigued.5

There are also many studies of eye blink that are not associated with drowsiness or fatigue. The use of the startled eye blink as a physiological measure was researched by Chittaro and Sioni,6 while Tada et al. studied eye blink behavior in 71 primate species in terms of its evolution.7 Champaty et al. researched the correction of a gait abnormality called foot drop using functional electrical stimulation triggered when people blink their eyes.8 For people who are unable to type on a keyboard, Ashtiani and MacKenzie developed a typing program using eye blink and gaze tracking.9

There have been many previous studies of eye blink detection, and they can be categorized into nonimage-based and image-based methods. Nonimage-based methods have used two electrode pads to detect the electro-oculography (EOG) signal of the eyelid3 and an EOG system for eye movement analysis.10 These sensors are attached around the muscles of the eyes and are used to classify eye openness and closure from the acquired electronic signals. Other methods use electromyography,6,8 which analyzes muscle signals to classify eye openness and closure. These methods have the advantage that their objective measurements permit detailed numerical analysis. However, they have the disadvantages that the sensor must be attached to the user’s body, and stimulation signals from the body can disturb the desired signals (i.e., produce noise), which can limit the user’s actions.

To overcome these problems, image-based methods can be considered as an alternative. Because such methods do not use attached sensors, user convenience is enhanced by allowing natural movement of the head or body. Previous research11 surveyed various methods of eye localization, such as probabilistic frameworks, adaptive boosting (AdaBoost), the support vector machine (SVM), and general-to-specific model definitions, with comparisons of eye localization results. Image-based methods include video-based and single image-based methods.

Video-based methods detect eye blink based on information from successive images. Lalonde et al. used the scale invariant feature transform and differences between consecutive images for eye blink detection.12 Mohanakrishnan et al. proposed a method of detecting eye blink based on eyelid and face movement.13 Lee et al. researched eye blink detection using both the cumulative difference of the number of black pixels of the eye area in successive frames and the ratio of height to width of the eye area in an image.14 Although the accuracy of video-based methods is usually high, extracting eye blink information from multiple images takes more processing time.

Single image-based methods can be divided into those with and without training. The former detect eye blink based on a trained model; however, the performance of eye blink detection depends on the training results, and an additional training procedure is required for a new database. Jo et al. used an SVM based on sparseness and kurtosis, together with features from principal component analysis (PCA) + linear discriminant analysis, for detecting open or closed eyes.2 Bacivarov et al. developed an eye blink detection method that employed feature tracking around the eye using an active appearance model.15 Methods based on user-specific eye template matching also exist.16,17 Wu and Trivedi proposed a method of blink detection based on tensor subspace analysis.18 Lenskiy and Lee used neural network approximation based on a skin color histogram, together with probability density functions of the facial feature classes.19 Hoang et al. used the PCA method for classifying open and closed eyes for blink detection.20 Trutoiu et al. proposed a PCA-based model for animating eye blinks.21

As a single image-based method without training, a method based on iris detection was researched by Colombo et al.22 However, the performance of their detection of eye openness or closure is limited because it relies only on the simple measure of the y-axis disparity of the detected iris. Other previous research23 proposed a method of eye detection based on skin color, rules, and geometrical relationships of facial components. However, this method requires additional lip detection before eye localization because the geometrical relationships are based on lip position. Further research24 proposed an eye detection method and a system for monitoring a driver’s state based on the eyes. However, the method requires two camera systems, of visible light and near-infrared light. In addition, the image resolution of the eye region in their experiment is large because the Z distance between the camera and the driver is small.

Most previous research was conducted with high-resolution eye images. However, in an environment where a user watches TV at a distance and the camera is positioned close to the TV, the resolution of the eye image is very low. To overcome the above problems of previous research and to detect eye blink accurately in low-resolution images, we propose a new fuzzy-based method for classifying eye openness and closure. The proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. The eye region is binarized using a fuzzy logic system based on the I and K inputs, which is less affected by eyelashes and shadows around the eye. Through the fuzzy logic system, the combined image of the I and K pixels is obtained. In order to reflect the effect of all the inference values (IVs) on the output score of the fuzzy system, we use the revised weighted average method (RWAM), in which the rectangular regions defined by all the IVs are considered when calculating the output score. Then, the final classification of eye openness or closure is made based on the standard deviation of the vertical pixel length calculated from the binarized image. In our research, the classification of eye openness or closure is successfully performed on low-resolution eye images captured in an environment where people watch TV at a distance. Because it uses a fuzzy logic system, our method requires no additional training procedure, irrespective of the database. Our main contribution is obtaining a more accurate binarized image of the eye region for eye segmentation by combining the I and K images based on a fuzzy logic system, which can enhance the accuracy of eye-state classification. Any eye-state classification method can be used with our new method of eye segmentation. Table 1 compares previous methods and the proposed method for detecting eye blink.

Table 1

Comparison of previous and proposed eye blink detection methods.

Category: Nonimage-based
Method: Based on electro-oculography3,10 and electromyography6,8
Strengths: Their objective measurements permit detailed numerical analysis.
Weaknesses:
- The sensor must be attached to the user’s body.
- The stimulation signals from a body can disturb the desired signals, which can limit the user action.
- User inconvenience is high because of the attached sensor.

Category: Image-based (video-based)
Method: Based on scale invariant feature transform and difference image,12 eyelid and face movement,13 and the cumulative difference of the number of black pixels of the eye area in successive frames14
Strengths: The accuracy of blink detection is usually high.
Weaknesses:
- It takes more processing time to extract eye blink information from multiple images.

Category: Image-based (single image-based, training-based)
Method: Based on the support vector machine,2 active appearance model,15 user-specific eye template matching,16,17 tensor subspace analysis,18 neural network and probability density functions,19 and principal component analysis20,21
Strengths: Less processing time is required than for video-based methods.
Weaknesses:
- The performance of eye blink detection is dependent on the training results.
- It requires an additional training procedure with a new database.

Category: Image-based (single image-based, non-training-based)
Method: Iris detection,22 eye detection based on skin color, rules, and geometrical relationships of facial components,23 two cameras of visible light and near-infrared (NIR) light,24 and fuzzy-based eye segmentation (proposed method)
Strengths: No additional training procedure is required, irrespective of the database.
Weaknesses:
- The performance enhancement of detecting eye blink is limited because only the simple measure of the y-axis disparity of the detected iris is used.22
- Additional lip detection is required before eye localization because the geometrical relationships are based on lip position.23
- An additional NIR camera is required, and the image resolution of the eye region is large because the Z distance between the camera and driver is small.24
- Heuristic design of the fuzzy rules and membership functions is required (proposed method).

Our paper is organized as follows. The proposed method is explained in Sec. 2. In Sec. 3, we describe and analyze the experimental results. Finally, our conclusions are summarized in Sec. 4.

2.

Proposed Method

2.1.

Overview of the Proposed Approach

Figure 1 shows an overview of the proposed method. First, we obtain an RGB eye image. Then, we normalize the I and K values to the range [0, 1] in order to use them as inputs to the fuzzy system. Next, we obtain the output image from the fuzzy system, which combines each pixel of the I and K images. Each pixel of the output image lies in the range [0, 1], and this image is converted to one in which each pixel lies in the range [0, 255] by simple linear scaling.

Fig. 1

Flow chart of the proposed method.

OE_54_3_033103_f001.png

The image is then binarized using a specific threshold, and we perform component labeling to select the biggest eye blob area. Then we project the black pixels in the vertical direction and calculate the standard deviation of the vertical lengths of the projected black pixels. Finally, eye openness or closure is classified based on this standard deviation.

2.2.

Eye Image Preprocessing

2.2.1.

Detection of the eye region

To obtain the eye region from the input image, we must first detect the face region. We use the widely used AdaBoost method for face detection.25 AdaBoost is then used to detect the region of interest (ROI) of an eye within the face. When eye detection by AdaBoost fails, we use sub-block-based template matching for eye detection.26,27 In our system, eye detection by the AdaBoost method is considered a failure if it returns no result when searching the eye area.

If the eye detection by sub-block-based template matching also fails, the eye region is located by adaptive template matching.28 Sub-block-based template matching involves locating the eye candidate position by scanning a mask of 3×3 sub-blocks.26,27 At each scanning position of the mask, the sum of the differences between the gray average of the central sub-block and those of the eight surrounding sub-blocks is calculated as the matching value. The position where this matching value is maximized is taken as the eye position.26,27 If the matching values of all the detected eye regions are less than a given threshold, our system determines that sub-block-based template matching has failed to detect the eye region and performs adaptive template matching, as follows.
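
The following Python sketch shows one way to compute this matching value over a grayscale ROI. The sub-block size and the exhaustive single-pixel scan are our assumptions, since the paper does not specify them.

```python
import numpy as np

def subblock_eye_position(gray, block=4):
    """Scan a 3x3 sub-block mask over a grayscale eye ROI.

    At each position, the matching value is the sum of differences
    between the gray averages of the eight surrounding sub-blocks and
    that of the central sub-block, so a dark eye surrounded by brighter
    skin yields a large value. The sub-block size (4 pixels here) is an
    assumption; the paper does not state it.
    """
    h, w = gray.shape
    best_val, best_pos = -np.inf, (0, 0)
    for y in range(h - 3 * block + 1):
        for x in range(w - 3 * block + 1):
            means = [gray[y + r * block:y + (r + 1) * block,
                          x + c * block:x + (c + 1) * block].mean()
                     for r in range(3) for c in range(3)]
            center = means[4]  # central sub-block of the 3x3 mask
            value = sum(m - center for i, m in enumerate(means) if i != 4)
            if value > best_val:
                best_val, best_pos = value, (x, y)
    return best_pos, best_val  # a low best_val signals detection failure
```

If the returned matching value falls below the threshold, the system falls back to the adaptive template matching described next.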

Once we have found the eye ROI in the eye detection step, we form an eye template image. In the next frame, we check the similarity between the eye template and the current frame image. If the matching score (similarity) is higher than a threshold, the detected region is regarded as the same as the eye template image, and the template is adaptively updated with the detected region in the current frame.28 If the matching score is less than the threshold, our system determines that the adaptive template matching has failed to detect the eye region, and the eye ROI detected in the previous frame is used as the current one.
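
A minimal OpenCV sketch of this adaptive update follows; normalized cross-correlation (TM_CCOEFF_NORMED) as the similarity measure and the 0.8 threshold are illustrative assumptions, not values from the paper.

```python
import cv2

def adaptive_template_match(frame_gray, template, threshold=0.8):
    """Match the stored eye template against the current frame and,
    on success, return the matched region as the updated template.
    Returns None on failure, in which case the caller keeps the eye
    ROI detected in the previous frame."""
    result = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(result)
    if score < threshold:
        return None
    h, w = template.shape
    return frame_gray[y:y + h, x:x + w]  # template for the next frame
```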

2.2.2.

Obtaining I and K images from eye region

The colors of the human pupil and the eyelashes vary from light brown to black—this is usually darker than the color of the skin. To classify dark and bright pixels and detect eye openness and closure, we propose fuzzy-based segmentation. Two features are used as inputs to the fuzzy system. The first input is the intensity (I) of the hue saturation intensity (HSI) color space.29 The second input is black (K) from the CMYK color space.30 The two inputs are calculated by Eqs. (1) and (2), respectively:

Eq. (1)

I = (R + G + B) / 3,

Eq. (2)

K = 1 − MAX(R, G, B).

In Eqs. (1) and (2), R, G, and B are the red, green, and blue values of an RGB pixel. The I value is obtained by averaging R, G, and B. The K value is obtained by Eq. (2). The input values of a fuzzy system should range from 0 to 1, whereas the range of I and K is from 0 to 255. Therefore, we normalize I and K to the range from 0 to 1. For that, we obtain the histograms of the I and K values, respectively, and model each histogram as a Gaussian distribution. Then we set the minimum and maximum boundaries of I and K for normalization based on a three-sigma range (99.7%)31 in order to exclude noise values from the normalization range. That is, we take −3σ and +3σ as the min and max bounds from the histogram, and these bounds are obtained for each eye image. If a pixel value of the I or K image is over the max bound, it is set to 1, and if the pixel value is under the min bound, it is set to 0. Equation (3) represents this method:

Eq. (3)

b(x,y) = 0, if a(x,y) ≤ Minbound
b(x,y) = 1, if a(x,y) ≥ Maxbound
b(x,y) = [a(x,y) − Minbound] / (Maxbound − Minbound), otherwise,
where a(x,y) is a pixel value from the I or K image, and b(x,y) is the normalized pixel value. From Eq. (3), we obtain the two inputs whose ranges are from 0 to 1 for the fuzzy system. Figure 2 shows the examples of I and K images and their histograms.
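
As a minimal sketch of Eqs. (1)–(3) in Python (using NumPy); taking the mean and standard deviation directly from the pixel values, rather than from an explicit Gaussian fit of the histogram, is our simplifying assumption:

```python
import numpy as np

def i_k_images(rgb):
    """I and K images from an RGB image with channels in [0, 255]."""
    rgb = rgb.astype(np.float64)
    i_img = rgb.mean(axis=2)          # Eq. (1): I = (R + G + B) / 3
    k_img = 255.0 - rgb.max(axis=2)   # Eq. (2), scaled to [0, 255]
    return i_img, k_img

def normalize_three_sigma(img):
    """Eq. (3): clip to [mu - 3*sigma, mu + 3*sigma], rescale to [0, 1]."""
    mu, sigma = img.mean(), img.std()
    lo, hi = mu - 3.0 * sigma, mu + 3.0 * sigma
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
```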

Fig. 2

I and K images obtained from an RGB image, and their histograms. (a), (f), (k), and (p) RGB images. (b), (g), (l), and (q) I images. (c), (h), (m), and (r) K images. (d) Histogram of (b) (non-Gaussian shape). (e) Histogram of (c) (non-Gaussian shape). (i) Histogram of (g) (shape similar to Gaussian). (j) Histogram of (h) (shape similar to Gaussian). (n) Histogram of (l) (shape similar to Gaussian). (o) Histogram of (m) (shape similar to Gaussian). (s) Histogram of (q) (shape similar to Gaussian). (t) Histogram of (r) (shape similar to Gaussian).

OE_54_3_033103_f002.png

As shown in Figs. 2(d) and 2(e), although the histograms of some images differ a little from a Gaussian shape, most image histograms are quite similar to a Gaussian shape, as shown in Figs. 2(i), 2(j), 2(n), 2(o), 2(s), and 2(t). Therefore, our normalization based on a Gaussian shape does not introduce a significant error. In addition, without this method of setting the minimum and maximum boundaries for normalization based on the assumption of a Gaussian shape, it is difficult to find another theoretically grounded way to determine the boundaries. Therefore, we use the normalization method based on the assumption of a Gaussian shape in our research.

2.3.

Fuzzy-Based Eye Image Segmentation Method

2.3.1.

Definition of fuzzy membership functions

In general, it is difficult to determine an optimal threshold for image segmentation. In this study, we propose a fuzzy-based eye image segmentation method. On the basis of the assumption that the pupil and eyelashes are usually darker than the skin, we use the I and K values as the two inputs to the fuzzy system. Each of these inputs is in the range [0, 1]. The output value from the fuzzy system also ranges from 0 to 1.

The fuzzy membership functions used in this study are shown in Fig. 3. Generally, membership functions are used to represent the distribution of input or output values in a fuzzy system. As shown in Fig. 3, the low (L), middle (M), and high (H) areas are usually designed to overlap. We use these membership functions and fuzzy rules to obtain an output value through the defuzzification method. Table 2 shows the fuzzy rule table used in this study.

Fig. 3

Illustrations showing the input and output fuzzy membership functions: (a) the input fuzzy membership function for I value; (b) the input fuzzy membership function for K value; (c) output fuzzy membership function.

OE_54_3_033103_f003.png

Table 2

Fuzzy rule table for obtaining the output value of fuzzy system.

Input 1 (I value) | Input 2 (K value) | Output value
L | L | M
L | M | M
L | H | H
M | L | M
M | M | M
M | H | H
H | L | L
H | M | L
H | H | L

As shown in Fig. 2, the eye region is usually darker than other areas, so its I value is low and its K value is high. Therefore, if the I value of a pixel is low and its K value is high, the possibility that the pixel belongs to the eye region is high (H). Conversely, if the I value is high and the K value is low, the possibility is low (L). Based on this, we designed the fuzzy rule table shown in Table 2. Accordingly, when the I and K values of a pixel are low and high, respectively, the output value is close to 1. In contrast, if the I and K values are high and low, respectively, the output value is close to 0. However, when the environmental illumination is bright, the I value of eye pixels can be brighter than normal. Camera blurring can also cause the I value of eye pixels to be brighter than normal. Hence, we designed the fuzzy rule table so that the I value of eye pixels can be L or M when the K value is H.

2.3.2.

Obtaining the output value of fuzzy system by defuzzification method

As shown in Fig. 4(a), three outputs are obtained as f1(L), f1(M), and f1(H) through the three membership functions of L, M, and H with input 1 (I value). Likewise, three outputs are obtained as f2(L), f2(M), and f2(H) through the three membership functions of L, M, and H with input 2 (K value). For example, if the input value of I is 0.538, then f1(L), f1(M), and f1(H) are 0.0, 0.924, and 0.076, respectively, as shown in Fig. 4(a). If the input value of K is 0.429, then f2(L), f2(M), and f2(H) are 0.143, 0.857, and 0.0, respectively, as shown in Fig. 4(b). With these two triples of outputs, we can form the nine combinations [f1(L), f2(L)], [f1(L), f2(M)], [f1(L), f2(H)], [f1(M), f2(L)], [f1(M), f2(M)], [f1(M), f2(H)], [f1(H), f2(L)], [f1(H), f2(M)], and [f1(H), f2(H)], as shown in Table 3.
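
The worked numbers above can be reproduced with triangular membership functions whose peaks sit at 0, 0.5, and 1; this placement is an assumption read off Fig. 3, not a numerical specification from the paper:

```python
def tri(x, left, peak, right):
    """Triangular membership function: 0 outside (left, right), 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def memberships(x):
    """L, M, H membership values of a normalized input in [0, 1]."""
    return {"L": tri(x, -0.5, 0.0, 0.5),   # peak at 0
            "M": tri(x,  0.0, 0.5, 1.0),   # peak at 0.5
            "H": tri(x,  0.5, 1.0, 1.5)}   # peak at 1

print(memberships(0.538))  # {'L': 0.0, 'M': 0.924, 'H': 0.076}
print(memberships(0.429))  # {'L': 0.142, 'M': 0.858, 'H': 0.0}
                           # (the paper rounds these to 0.143 and 0.857)
```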

Fig. 4

Illustrations of obtaining the output of membership function: (a) outputs of input 1 (I value) and (b) outputs of input 2 (K value).

OE_54_3_033103_f004.png

Table 3

Illustration of the nine combinations of output values and the IVs determined by the Min or Max rule.

Index of pairs | Output of f1(·) | Output of f2(·) | IV (Min rule) | IV (Max rule)
1 | 0.0 (L) | 0.143 (L) | 0.0 (M) | 0.143 (M)
2 | 0.0 (L) | 0.857 (M) | 0.0 (M) | 0.857 (M)
3 | 0.0 (L) | 0.0 (H) | 0.0 (H) | 0.0 (H)
4 | 0.924 (M) | 0.143 (L) | 0.143 (M) | 0.924 (M)
5 | 0.924 (M) | 0.857 (M) | 0.857 (M) | 0.924 (M)
6 | 0.924 (M) | 0.0 (H) | 0.0 (H) | 0.924 (H)
7 | 0.076 (H) | 0.143 (L) | 0.076 (L) | 0.143 (L)
8 | 0.076 (H) | 0.857 (M) | 0.076 (L) | 0.857 (L)
9 | 0.076 (H) | 0.0 (H) | 0.0 (L) | 0.076 (L)

Then, based on the Min or Max rule and Table 2, we can obtain the output values.32–34 For example, in the first row of Table 3, f1(L) and f2(L) are 0.0 (L) and 0.143 (L), respectively. We take 0.0 if we use the Min rule and 0.143 if we use the Max rule.32–34 According to the fuzzy rule table of Table 2, L and L become M. Therefore, we finally obtain 0.0 (M) and 0.143 (M) by the Min and Max rule, respectively, as shown in Table 3. For convenience, we call these values of 0.0 (M) and 0.143 (M) inference values (IVs) in our paper.32–34 In this way, we obtain the nine IVs shown in Table 3. Then, the final output score can be calculated using a defuzzification method.32–35 Detailed explanations of the defuzzification methods follow, with reference to Fig. 5.
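
Encoding Table 2 as a lookup, the nine IVs of Table 3 follow directly; a minimal sketch:

```python
from itertools import product

# Fuzzy rule table (Table 2): (I label, K label) -> output label.
RULES = {("L", "L"): "M", ("L", "M"): "M", ("L", "H"): "H",
         ("M", "L"): "M", ("M", "M"): "M", ("M", "H"): "H",
         ("H", "L"): "L", ("H", "M"): "L", ("H", "H"): "L"}

def inference_values(f1, f2, combine=min):
    """Nine IVs from the two membership dicts, via the Min (or Max) rule."""
    return [(combine(v1, v2), RULES[(l1, l2)])
            for (l1, v1), (l2, v2) in product(f1.items(), f2.items())]

f1 = {"L": 0.0, "M": 0.924, "H": 0.076}    # memberships of I = 0.538
f2 = {"L": 0.143, "M": 0.857, "H": 0.0}    # memberships of K = 0.429
print(inference_values(f1, f2, min))  # reproduces the Min-rule column of Table 3
print(inference_values(f1, f2, max))  # reproduces the Max-rule column of Table 3
```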

Fig. 5

Illustration of the defuzzification methods used in this study: (a) first of maxima (FOM), last of maxima (LOM), middle of maxima (MOM), and mean of maxima (MeOM) and (b) weighted average method.

OE_54_3_033103_f005.png

Figure 5 illustrates a number of defuzzification operators. In this study, we consider five defuzzification operators, as follows.32–34 Figure 5(a) shows the first of maxima (FOM) operator, which returns the smallest output score (s2) obtained from the biggest IVs [IV(M) and IV(H)]. The last of maxima (LOM) returns the biggest output score (s4) obtained from the biggest IVs [IV(M) and IV(H)]. To obtain a result using the middle of maxima (MOM) operator, we take the average of the output scores by FOM and LOM, i.e., (s2+s4)/2. The mean of maxima (MeOM) operator is the average of all the output scores from the biggest IVs [IV(M) and IV(H)], i.e., (s2+s3+s4)/3. In the original weighted average method, the output score (s5) is calculated by the weighted average over the rectangles R1, R2, and R3 of Fig. 5(b).35 Although nine IVs are obtained by the Min or Max rule, as shown in Table 3, the original weighted average method considers only the rectangular regions defined by the maximum IV of each output membership function [IV(M), IV(H), and IV(L) of Fig. 5(b)], which cannot reflect the effect of the other IVs. Therefore, we use RWAM, in which the rectangular regions defined by all the IVs are considered when calculating the output score.
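
As a concrete illustration, the following sketch implements this defuzzification in Python. Representing each IV’s rectangle simply by the center of its output membership function (0, 0.5, and 1 for L, M, and H) is our assumption; the paper defines the rectangles graphically in Fig. 5(b).

```python
CENTERS = {"L": 0.0, "M": 0.5, "H": 1.0}  # assumed output-function centers

def rwam(ivs):
    """Revised weighted average method (sketch): every one of the nine
    IVs contributes with weight equal to its height, rather than only
    the maximum IV of each output label as in the original method."""
    total = sum(iv for iv, _ in ivs)
    if total == 0.0:
        return 0.0
    return sum(iv * CENTERS[label] for iv, label in ivs) / total
```

For the Min-rule IVs of Table 3, this sketch yields a score of about 0.434 under the assumed centers; the score is then rescaled to [0, 255] as described below.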

The output scores given by defuzzification range from 0 to 1, and we rescale them to the range from 0 to 255 by multiplying by 255. From that, we obtain the eye image from the defuzzification method. As shown in Fig. 6, different images are produced depending on the selected defuzzification operator and the Min or Max rule.

Fig. 6

Comparisons of images by fuzzy system: (a) original image; (b) by Min rule with FOM; (c) by Min rule with LOM; (d) by Min rule with MOM; (e) by Min rule with MeOM; (f) by Min rule with revised weighted average method (RWAM); (g) by Max rule with FOM; (h) by Max rule with LOM; (i) by Max rule with MOM; (j) by Max rule with MeOM; (k) by Max rule with RWAM.

OE_54_3_033103_f006.png

Examining the images in Fig. 6, we can observe certain characteristics. In some images, such as Figs. 6(b), 6(d), and 6(e), the eye region is more separable from other areas than in the original image. Therefore, in order to classify eye openness and closure, we transform the output image of the fuzzy system in Fig. 6 into a binarized one, as shown in Fig. 7. We compared the performance of various binarization methods, as shown in Tables 4 and 5 and Figs. 10 and 12.

Fig. 7

Results of postprocessing step in the case of open eye: (a) by Min rule with FOM; (b) binarized image of (a); (c) result by component labeling and size filtering of (b); (d) vertical histogram of (c).

OE_54_3_033103_f007.png

Table 4

Classification results for eye openness and closure using database I (unit: %). (The smallest equal error rate is shown in bold type.)

No. | Fuzzy system applied | Min/Max rule | Defuzzification method | Binarization threshold obtained by | Type 1 error | Type 2 error | Equal error rate
1 | × | — | — | Gonzalez method | 24.30 | 24.20 | 24.25
2 | × | — | — | Otsu method | 35.20 | 35.60 | 35.40
3 | × | — | — | Static threshold | 9.10 | 9.20 | 9.15
4 | O | Min rule | FOM | Gonzalez method | 26.80 | 26.60 | 26.70
5 | O | Min rule | LOM | Gonzalez method | 49.20 | 27.30 | 38.25
6 | O | Min rule | MOM | Gonzalez method | 6.20 | 6.10 | 6.15
7 | O | Min rule | MeOM | Gonzalez method | 6.20 | 6.10 | 6.15
8 | O | Min rule | RWAM | Gonzalez method | 33.20 | 33.20 | 33.20
9 | O | Max rule | FOM | Gonzalez method | 41.40 | 41.40 | 41.40
10 | O | Max rule | LOM | Gonzalez method | 35.90 | 35.70 | 35.80
11 | O | Max rule | MOM | Gonzalez method | 41.60 | 41.60 | 41.60
12 | O | Max rule | MeOM | Gonzalez method | 38.30 | 38.20 | 38.25
13 | O | Max rule | RWAM | Gonzalez method | 78.80 | 22.90 | 50.85
14 | O | Min rule | FOM | Otsu method | 14.90 | 14.90 | 14.90
15 | O | Min rule | LOM | Otsu method | 22.00 | 21.90 | 21.95
16 | O | Min rule | MOM | Otsu method | 7.50 | 7.40 | 7.45
17 | O | Min rule | MeOM | Otsu method | 7.50 | 7.40 | 7.45
18 | O | Min rule | RWAM | Otsu method | 39.00 | 39.00 | 39.00
19 | O | Max rule | FOM | Otsu method | 50.00 | 49.80 | 49.90
20 | O | Max rule | LOM | Otsu method | 42.40 | 42.40 | 42.40
21 | O | Max rule | MOM | Otsu method | 46.30 | 46.30 | 46.30
22 | O | Max rule | MeOM | Otsu method | 42.80 | 42.50 | 42.65
23 | O | Max rule | RWAM | Otsu method | 34.70 | 34.60 | 34.65
24 | O | Min rule | FOM | Static threshold | 2.70 | 2.80 | 2.75
25 | O | Min rule | LOM | Static threshold | 41.00 | 41.20 | 41.10
26 | O | Min rule | MOM | Static threshold | 2.70 | 2.80 | 2.75
27 | O | Min rule | MeOM | Static threshold | 2.70 | 2.80 | 2.75
28 | O | Min rule | RWAM | Static threshold | 3.60 | 3.70 | 3.65
29 | O | Max rule | FOM | Static threshold | 52.10 | 52.20 | 52.15
30 | O | Max rule | LOM | Static threshold | 43.80 | 43.60 | 43.70
31 | O | Max rule | MOM | Static threshold | 52.30 | 52.10 | 52.20
32 | O | Max rule | MeOM | Static threshold | 32.20 | 31.90 | 32.05
33 | O | Max rule | RWAM | Static threshold | 5.50 | 5.50 | 5.50
FOM, first of maxima; LOM, last of maxima; MOM, middle of maxima; MeOM, mean of maxima; RWAM, revised weighted average method.

Table 5

Classification results for eye openness and closure using database II (unit: %). (The smallest equal error rate is shown in bold type.)

No. | Fuzzy system applied | Min/Max rule | Defuzzification method | Binarization threshold obtained by | Type 1 error | Type 2 error | Equal error rate
1 | × | — | — | Gonzalez method | 19.80 | 19.60 | 19.70
2 | × | — | — | Otsu method | 53.30 | 53.20 | 53.25
3 | × | — | — | Static threshold | 31.50 | 31.80 | 31.65
4 | O | Min rule | FOM | Gonzalez method | 38.60 | 38.60 | 38.60
5 | O | Min rule | LOM | Gonzalez method | 60.00 | 34.00 | 47.00
6 | O | Min rule | MOM | Gonzalez method | 14.80 | 14.40 | 14.60
7 | O | Min rule | MeOM | Gonzalez method | 14.60 | 14.80 | 14.70
8 | O | Min rule | RWAM | Gonzalez method | 54.90 | 54.60 | 54.75
9 | O | Max rule | FOM | Gonzalez method | 51.00 | 50.90 | 50.95
10 | O | Max rule | LOM | Gonzalez method | 47.90 | 47.80 | 47.85
11 | O | Max rule | MOM | Gonzalez method | 50.40 | 50.70 | 50.55
12 | O | Max rule | MeOM | Gonzalez method | 48.20 | 46.60 | 47.40
13 | O | Max rule | RWAM | Gonzalez method | 73.60 | 24.90 | 49.25
14 | O | Min rule | FOM | Otsu method | 10.20 | 10.50 | 10.35
15 | O | Min rule | LOM | Otsu method | 41.90 | 41.90 | 41.90
16 | O | Min rule | MOM | Otsu method | 13.00 | 13.20 | 13.10
17 | O | Min rule | MeOM | Otsu method | 13.00 | 13.20 | 13.10
18 | O | Min rule | RWAM | Otsu method | 49.80 | 49.70 | 49.75
19 | O | Max rule | FOM | Otsu method | 45.50 | 45.40 | 45.45
20 | O | Max rule | LOM | Otsu method | 46.40 | 46.40 | 46.40
21 | O | Max rule | MOM | Otsu method | 47.00 | 47.00 | 47.00
22 | O | Max rule | MeOM | Otsu method | 45.60 | 45.60 | 45.60
23 | O | Max rule | RWAM | Otsu method | 53.20 | 53.20 | 53.20
24 | O | Min rule | FOM | Static threshold | 10.40 | 10.50 | 10.45
25 | O | Min rule | LOM | Static threshold | 51.70 | 51.80 | 51.75
26 | O | Min rule | MOM | Static threshold | 10.40 | 10.50 | 10.45
27 | O | Min rule | MeOM | Static threshold | 10.40 | 10.50 | 10.45
28 | O | Min rule | RWAM | Static threshold | 15.70 | 15.50 | 15.60
29 | O | Max rule | FOM | Static threshold | 45.50 | 45.40 | 45.45
30 | O | Max rule | LOM | Static threshold | 53.40 | 53.60 | 53.50
31 | O | Max rule | MOM | Static threshold | 45.80 | 46.00 | 45.90
32 | O | Max rule | MeOM | Static threshold | 40.40 | 40.40 | 40.40
33 | O | Max rule | RWAM | Static threshold | 11.60 | 11.50 | 11.55

Fig. 8

Results of postprocessing step in the case of closed eye: (a) by Min rule with FOM; (b) binarized image of (a); (c) result by component labeling and size filtering of (b); (d) vertical histogram of (c).

OE_54_3_033103_f008.png

Fig. 9

Experimental setup and collected images: (a) experimental setup; (b) collected images (left image is the original one captured at the Z distance of 2 m, and right three images are the cropped face ones from the left image); (c) collected images (left image is the original one captured at the Z distance of 2.5 m, and right three images are the cropped face ones from the left image).

OE_54_3_033103_f009.png

2.4.

Classifying Eye Openness and Closure from a Binary Eye Image

With the binarized image, we perform component labeling. Figure 7(a) shows the image resulting from the Min rule with FOM. In this image, it is easy to separate the eye region from the skin. Figure 7(b) is the binarized version of Fig. 7(a), obtained using a static threshold. As shown in Fig. 7(c), we eliminate small noise areas using component labeling. Next, we take the biggest area among the binarized blobs as the eye region.

With the binarized image of Fig. 7(c), we can obtain the vertical histogram shown in Fig. 7(d). In the case of an open eye, the middle area of the histogram shows higher values, whereas the side areas show lower values. However, both the middle and side areas of the histogram show low values in the case of a closed eye, as shown in Fig. 8.

Based on this, we use the standard deviation of the histogram (the lengths of the black pixels in the vertical direction) as the feature for classifying open and closed eyes. If the standard deviation is above a specific threshold, we determine the eye to be open. Otherwise, the eye is assumed to be closed.
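
A minimal sketch of this decision rule, assuming the binarized blob is given as a 0/1 array; the threshold value is empirically determined and not reported in the paper:

```python
import numpy as np

def classify_eye_state(eye_blob, std_threshold):
    """Decide open/closed from the largest binarized eye blob.

    `eye_blob` is a 2-D 0/1 array after component labeling and size
    filtering. The vertical projection counts eye pixels per column;
    an open eye yields a tall middle and short sides (large spread),
    while a closed eye yields a uniformly low profile.
    """
    histogram = eye_blob.sum(axis=0)  # vertical projection per column
    return "open" if histogram.std() > std_threshold else "closed"
```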

3.

Experimental Results

3.1.

Experimental Results with Database I

To experimentally verify our classification method for eye openness and closure, we collected 6336 open eye images and 6294 closed eye images (database I). We captured the images from a distance of 2 to 2.5 m in an indoor environment in which each person watched TV at a distance. The images of database I were obtained using a Logitech C600 web camera equipped with a zoom lens, and the image resolution is 1600×1200 pixels.36 The camera is positioned below the TV. Figure 9 shows the experimental setup and examples of the collected images.

To measure the accuracy of the classification of eye openness and closure, we conducted two experiments, without and with our fuzzy-based method. As explained in Sec. 2.4, the image is binarized after the fuzzy-based fusion of the I and K images in our research. Therefore, for a fair comparison, we compared three binarization methods, the Gonzalez algorithm,37 the Otsu algorithm,38 and a static threshold,29 on the same images produced by the fuzzy-based method. Table 4 lists the complete set of results.

We compared the equal error rate (EER) of all methods. The EER is calculated as the error rate at the point where the type 1 error is most similar to the type 2 error. The type 1 error is the rate at which open eye images are incorrectly determined to be closed eye images, and the type 2 error is the rate at which closed eye images are incorrectly determined to be open eye images. In the cases of the Min rule with LOM using the Gonzalez method and the Max rule with RWAM using the Gonzalez method, there is a large difference between the EER and the type 1 (type 2) error. That is because the type 1 and type 2 error cases do not change continuously with the threshold for discriminating open and closed eyes.
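
For illustration, such an EER can be computed from the per-image standard-deviation features as follows; the sweep over candidate thresholds is our assumption about the evaluation procedure:

```python
import numpy as np

def equal_error_rate(open_feats, closed_feats):
    """Sweep the open/closed decision threshold and return the error
    rate at the point where type 1 and type 2 errors are most similar."""
    candidates = np.unique(np.concatenate([open_feats, closed_feats]))
    best_gap, eer = np.inf, None
    for t in candidates:
        type1 = np.mean(open_feats <= t)   # open eyes judged closed
        type2 = np.mean(closed_feats > t)  # closed eyes judged open
        if abs(type1 - type2) < best_gap:
            best_gap, eer = abs(type1 - type2), (type1 + type2) / 2.0
    return eer
```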

As the results show, the lowest EER values were obtained by the Min rule with FOM, the Min rule with MOM, and the Min rule with MeOM, using a static threshold with the fuzzy system. From this, we can confirm that the proposed method is superior to the others.

Figure 10 shows the receiver operating characteristic (ROC) curves for the 10 highest ranked EER results from Table 4. As shown in Fig. 10, the Min rule with FOM and static threshold, the Min rule with MOM and static threshold, and the Min rule with MeOM and static threshold outperformed the other methods. For the images produced by the Min rule with FOM, MOM, and MeOM, the same images are obtained after binarization with the static threshold, and consequently the ROC curves for classifying eye openness and closure are identical, as shown in Fig. 10.

Fig. 10

Receiver operating characteristic (ROC) curves for the 10 highest ranked equal error rate (EER) results from Table 4.

OE_54_3_033103_f010.png

3.2.

Experimental Results with Database II

In order to measure the effect of the kind of database on the performance of our method, we conducted a further experiment on the classification of eye openness and closure using an open database (ZJU Eyeblink Database).39 This database has 80 video clips with a resolution of 320×240 pixels. We used 20 of these video clips, excluding the clips in which subjects wore glasses and the images where eye detection failed. The resulting image set contained a total of 5376 images, with 4891 open eye images and 485 closed eye images (database II). Figure 11 shows examples of images from the open database.

Fig. 11

Examples of images from ZJU Eyeblink Database.

OE_54_3_033103_f011.png

Table 5 presents the EER results obtained from database II. The EER results of the methods using our fuzzy system are better than those given by other methods.

In the cases of the Min rule with LOM using the Gonzalez method and the Max rule with RWAM using the Gonzalez method, there is again a large difference between the EER and the type 1 (type 2) error. That is because the type 1 and type 2 error cases do not change continuously with the threshold for discriminating open and closed eyes.

Figure 12 shows the ROC curves for the 10 highest ranked EER results from Table 5. Although the EER of the Min rule with FOM and the Otsu method is the lowest, as shown in Table 5, Fig. 12 shows that the overall accuracies of the Min rule with FOM and static threshold, the Min rule with MOM and static threshold, and the Min rule with MeOM and static threshold are the highest in terms of the ROC curves, similar to the results of Fig. 10. From Fig. 12, we can confirm that the proposed fuzzy-based method outperformed the other methods. For the images produced by the Min rule with FOM, MOM, and MeOM, the same images are obtained after binarization with the static threshold, and consequently the ROC curves for classifying eye openness and closure are identical, as shown in Fig. 12.

Fig. 12

ROC curves for the 10 highest ranked EER results from Table 5.

OE_54_3_033103_f012.png

3.3.

Experimental Results Analysis

Figure 13 shows images that resulted in good classification of eye openness and closure with database I. As shown in Fig. 13(d), the resulting images of open eyes are clearly discriminated from those of closed eyes, and the consequent feature values [standard deviations of the histogram (the lengths of the black pixels in the vertical direction)] of the open eyes are larger than those of the closed eyes. Therefore, the open eye can be discriminated from the closed eye.

Fig. 13

Images of good classification results (database I): (a) original images, (b) images by Min rule with FOM and static threshold, (c) binarized images of (b), (d) component labeling results of (c). (The left three images of each row are open eyes, whereas the right three ones are closed eyes).

OE_54_3_033103_f013.png

Figure 14 shows images that resulted in bad classification of eye openness and closure with database I. In the case of open eye images, bad classification occurred when the eye image is too blurred [the first and second images of the left three in Fig. 14(a)] or when reflections exist on the eyeball [the third image of the left three in Fig. 14(a)].

Fig. 14

Images of bad classification results (database I): (a) original images, (b) images by Min rule with FOM and static threshold, (c) binary images of (b), (d) component labeling results of (c). (The left three images of each row are open eyes, whereas the right three ones are closed eyes).

OE_54_3_033103_f014.png

In the case of closed eye images, bad classification was due to image blurring [the first image of the right three in Fig. 14(a)], incorrect detection of the eye region [the second image of the right three in Fig. 14(a)], or the incorrect selection of the eyebrow by component labeling [the third image of the right three in Fig. 14(a)].

Figure 15 shows images that resulted in good classification of eye openness and closure with database II. As shown in Fig. 15(d), the resulting images of open eyes are clearly discriminated from those of closed eyes, and the consequent feature values [standard deviations of the histogram (the lengths of the black pixels in the vertical direction)] of the open eyes are larger than those of the closed eyes. Therefore, the open eye can be discriminated from the closed eye.

Fig. 15

Images of good classification results (database II): (a) original images, (b) images by Min rule with FOM and static threshold, (c) binarized images of (b), (d) component labeling results of (c). (The left three images of each row are open eyes, whereas the right three ones are closed eyes).

OE_54_3_033103_f015.png

Figure 16 shows images that resulted in bad classification of eye openness and closure with database II. In the case of open eye images, bad classification occurred when eyelid pixels were disconnected by image blurring [the first image of the left three in Fig. 16(a)], when the eye is not widely opened and the eyebrow is incorrectly selected by component labeling [the second image of the left three in Fig. 16(a)], or when the eye image is too dark [the third image of the left three in Fig. 16(a)].

Fig. 16

Images of bad classification results (database II): (a) original images, (b) images by Min rule with FOM and static threshold, (c) binary images of (b), (d) component labeling results of (c). (The left three images of each row are open eyes, whereas the right three ones are closed eyes).

OE_54_3_033103_f016.png

In the case of closed eye images, bad classification happens when the eye is not completely closed [the first image of the right three in Fig. 16(a)], when the eye region is not correctly detected [the second image of the right three in Fig. 16(a)], or when the eye image is too dark and the eyebrow is incorrectly selected by component labeling [the third image of the right three in Fig. 16(a)].

It is usually difficult to evaluate the accuracy of eye segmentation because all the pixels of an accurate eye region would have to be manually labeled as ground-truth data. Therefore, in our research, we measured the good and bad responses of Figs. 13–16 based on the error of classifying eye openness and closure in Tables 4 and 5. That is, a good response means that an open or closed eye is correctly classified as open or closed, respectively, by our method. A bad response means that an open or closed eye is incorrectly classified as closed or open, respectively.

As shown in Eqs. (1) and (2), I is obtained by averaging R, G, and B, whereas K is obtained by selecting the maximum value among R, G, and B and subtracting it from 1. For example, for a pixel of medium gray level, we can assume that R, G, and B are 1, 0.5, and 0, respectively, if R, G, and B each range from 0 to 1. Then I of Eq. (1) is 0.5 (1.5/3), whereas K of Eq. (2) is 0. In the case of white (R=G=B=1), I is 1 (3/3), whereas K is 0. Furthermore, in the case of black (R=G=B=0), I is 0 (0/3), whereas K is 1.

Comparing these three cases, the gray pixel of medium level (R=1, G=0.5, B=0) is represented as white in the K value (K=0), whereas it is a gray pixel of medium level in the I value (I=0.5). The gray level of the skin surrounding an eye can be regarded as a medium gray level because it is lower than the bright sclera and higher than the dark eyeball, as shown in Fig. 17(a). Therefore, K has the effect of making the gray pixels of the skin surrounding the eye close to white while maintaining the level of the dark eyeball. Consequently, the contrast between the surrounding skin and the dark eyeball is increased more in the K image than in the I image, which can enhance the accuracy of segmenting the eyeball from the surrounding skin of the eye.

Fig. 17

Comparisons of binarization with I, K, and the image from our fuzzy-based method in the case of a large number of eyelashes in the original image: (a) original images of open and closed eyes, (b) an open eye, and (c) a closed eye.

OE_54_3_033103_f017.png

However, the K image has the disadvantage of making the gray pixel of the eyelid, eyelashes, and shadows (whose gray levels are also higher than the dark eyeball) close to white, which can cause the eyelid line to be erroneously segmented from the surrounding skin of the eye.

Therefore, we combine the I and K images using a fuzzy method, which retains the advantages of both I (less affected by the eyelid, eyelashes, and shadows) and K (enhanced contrast between the eyeball and the surrounding skin). Thus, we can improve the final accuracy of the eye segmentation and of the determination of whether the eye is open or closed, with less effect from eyelashes and shadows, through the fuzzy-based combination.

The following experiments show that these claims are correct. Figures 17 and 18 compare binarization using I, K, and the image from our fuzzy-based method for original images containing a large number of eyelashes and shadows, respectively. They demonstrate that a more accurate binarized image of the eye region can be obtained by our fuzzy-based method (Min rule with FOM) than from the I or K images.

Fig. 18

Comparisons of binarization with I, K, and the image from our fuzzy-based method when the shadows around the eye are included in the original image, for the cases of (a) an open eye, and (b) and (c) a closed eye.

OE_54_3_033103_f018.png

In addition, as shown in Table 4 and Fig. 10 (database I), the average EER of the classification of eye openness and closure by our fuzzy-based combination method is 2.75%, which is much smaller than that obtained when not combining the I and K images (9.15%). In addition, as shown in Table 5 and Fig. 12 (database II), the average EER of the classification of eye openness and closure by our fuzzy-based combination method is 10.35%, which is also much smaller than that obtained when not combining the I and K images (19.70%). Thus, we found that our fuzzy-based combination method outperforms that using either the I or K image without combining them.

We included explanations and experiments for other races, especially African-Americans. Experiments were performed with 208 images of two African-Americans. As shown in Fig. 19, we found that a more accurate binarized image of the eye region can be obtained by our fuzzy-based method (Min rule FOM) than from the I or K images. In addition, the average EER of the classification of eye openness and closure by our fuzzy-based combination method is 2.8%, which is similar to that of Table 4 using database I. Therefore, we concluded that our fuzzy-based combination method is robust to images of other races.

Fig. 19

Comparisons of binarization with I, K, and the image from our fuzzy-based method in the case of African-Americans: (a) original captured image, and the cases of (b) and (c) an open eye, and (d) and (e) a closed eye.

OE_54_3_033103_f019.png

We included the explanations and experiments in the case of pose variations (head rotation). Experiments were performed with 213 images of pose variations. As shown in Fig. 20, we found that a more accurate binarized image of the eye region can be obtained by our fuzzy-based method (Min rule FOM) than from the I or K images. In addition, the average EER of the classification of eye openness and closure by our fuzzy-based combination method is 2.9%, which is similar to that of Table 4 using database I. Thus, we found that our fuzzy-based combination method is robust to images of varying poses.

Fig. 20

Comparisons of binarization with I, K, and the image from our fuzzy-based method in the case of pose variations (head rotation): (a) original captured image, (b) an open eye, and (c) a closed eye.

OE_54_3_033103_f020.png

We included explanations and experiments in the case of users wearing glasses. Experiments were performed with 304 images of users wearing glasses. As shown in Fig. 21, a more accurate binarized image of the eye region can be obtained by our fuzzy-based method (Min rule FOM) than from the I or K images. In addition, the average EER of the classification of eye openness and closure by our fuzzy-based combination method is 3%, which is similar to that of Table 4 using database I. Therefore, we found that our fuzzy-based combination method is robust to images of users with glasses.

Fig. 21

Comparisons of binarization with I, K, and an image by our fuzzy-based method in the case of users wearing glasses: (a) shows an open eye, and (b) and (c) show a closed eye.

OE_54_3_033103_f021.png

To obtain the eye region from the input image, we first detect the face region in the input image. We used the widely used AdaBoost method for face detection.25 AdaBoost is used to detect the ROI of the eye within a face. Rather than performing additional training of the AdaBoost method with our own database, we used the already-trained AdaBoost algorithm provided by the OpenCV library (version 2.3.1).40

As shown in Fig. 23, the eyes are so small that detection errors and processing time increase if the AdaBoost method is used to detect the eye regions directly from the entire image. We compared the results of eye detection by our method (the eye is detected within the eye ROI of a detected face region), as shown in Fig. 22, with those of the method in which the eye is located in the entire image without face detection, as shown in Fig. 23. As shown in Fig. 23, cases of incorrectly detected eye regions occur, whereas there is no such error in Fig. 22.

Fig. 22

Examples of correct face and eye detection using our method.

OE_54_3_033103_f022.png

Fig. 23

Examples of incorrect eye detection in the entire image without face detection.

OE_54_3_033103_f023.png

We also compared the processing time of eye detection using our method with that of the method in which the eye is located in the entire image without face detection. Experimental results showed that the processing time for eye detection in the latter method was 1.102 s, which is much longer than in our method (the processing time including face detection (58.67 ms) and eye detection (12.70 ms) is 71.37 ms, as shown in Table 6). Therefore, we performed eye detection within the ROI of a face.

Table 6

Processing time for each step of the proposed method per image (unit: ms).

Each step | Processing time
Face detection | 58.67
Eye region of interest detection | 12.70
Determine normalization bounds | 2.348
Obtain I and K images | 0.0
Output value by fuzzy system | 5.86
Binarize the segmented image | 5.342
Component labeling | 3.140
Vertical projection | 1.002
Calculate the standard deviation and classify open and closed eyes | 0.0
Total | 89.062

We measured the accuracies of face and eye detection using our method. The accuracies are measured based on Eqs. (4) and (5):

Eq. (4)

Recall = Ntp / M,

Eq. (5)

Precision = Ntp / (Ntp + Nfp),
where M is the total number of faces (or eyes) in the images, Ntp is the number of true positives, and Nfp is the number of false positives. True positives mean that the faces (or eyes) were detected correctly, while false positives represent cases where nonfaces (or noneyes) were incorrectly detected as faces (or eyes). If the recall value is close to 1, the accuracy of the face (or eye) detection process is high. If the precision value is 1, all of the detected face (or eye) regions are correct with zero false positives (Nfp=0). Experimental results with the images from database I showed that the recall and precision of face detection by our method were 100 and 100%, respectively. In addition, the recall and precision of eye detection by our method were 99.8 and 99.5%, respectively. The examples of face and eye detection by our method are shown in Fig. 22.
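
A small worked example of Eqs. (4) and (5), with hypothetical counts chosen only for illustration:

```python
def recall_precision(n_tp, n_fp, m):
    """Eqs. (4) and (5): recall = Ntp / M, precision = Ntp / (Ntp + Nfp)."""
    return n_tp / m, n_tp / (n_tp + n_fp)

# Hypothetical counts: 998 of 1000 eyes detected, with 5 false positives.
print(recall_precision(998, 5, 1000))  # (0.998, ~0.995)
```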

In our research, although we use conventional fuzzy membership functions and defuzzification methods (FOM, LOM, MOM, and MeOM), we newly propose the fuzzy rule table shown in Table 2, which reflects the characteristics of the I and K values for accurate eye segmentation. In addition, we newly propose RWAM (in which the rectangular regions from all IVs, rather than only the maximum IVs, are considered when calculating the output score) as a defuzzification method and compare the performances. Figures 3(a) and 3(b) show the input fuzzy membership functions for the I and K values, respectively. Figure 3(c) represents the output fuzzy membership functions. In all cases, we used a simple linear (triangular) function. In previous studies, these membership functions were defined by the heuristic experience of the researcher (not by experiments). We used the linear (triangular) membership function because it has been widely used in fuzzy-based methods41–43 considering the processing time and the complexity of the problem to be solved. By determining the membership functions and fuzzy rule table heuristically (not experimentally), the conventional fuzzy-based method has the advantages of not requiring additional training procedures, which take considerable processing time, and of being less affected by the type of training database.

In our research, we used only three thresholds (parameters). The first is the threshold for determining whether eye detection by sub-block-based template matching or adaptive template matching was successful. The second is the binarization threshold of the static threshold method of Sec. 2.4 and Table 4. The last is the threshold for determining eye openness and closure based on the standard deviation of Sec. 2.4. All of these thresholds were determined empirically by trial and error. The recall and precision of eye detection by our method were 99.8 and 99.5%, respectively, from which we conclude that the first threshold is appropriate. As shown in Table 4, the static threshold method using the second threshold outperforms binarization by the Otsu and Gonzalez methods, from which we conclude that the second threshold is appropriate. As shown in Tables 4 and 5, our fuzzy-based method of determining eye openness and closure outperforms the other methods, from which we conclude that the third threshold is appropriate.

In our research, we performed experiments with two databases (databases I and II) in order to measure the performance of our method in various database environments (image resolution, Z distance between camera and user, etc.). As shown in Figs. 9(b), 9(c), and 22, although the image resolution of database I is as large as 1600×1200 pixels, the image resolution of the eye region is as small as 25×12 pixels, because the Z distance between the camera and the user was large when collecting this database. As shown in Fig. 11, the image resolution of database II is 320×240 pixels, and the image resolution of the eye region is also as small as 25×12 pixels. Although the image resolution of database II is smaller than that of database I, the resolutions of the eye region in databases I and II are similar because the Z distance between the camera and user is much larger in database I.

Most widely used face databases include eye regions with higher image resolution. In the case of the PAL database,44 the eye region is larger than 60×30 or 90×45 pixels. In the case of the AR database,45 the eye region is larger than 55×25 pixels. Therefore, the image resolution of the eye regions of the databases in our experiments is lower than that of other face databases.44,45

As shown in Table 4, the error of determining eye openness and closure is 2.75% by our method. This means that about two or three frames per 100 successive frames of valid open eyes are incorrectly determined as closed eyes and skipped. In addition, about two or three frames among 100 successive frames of closed eyes are incorrectly determined as open eyes.

3.4.

Processing Time of the Proposed Method

To measure the processing time of the proposed method, we used a desktop computer with an Intel Core i7 975 processor at 3.33 GHz and 8 GB of RAM. Table 6 lists the processing time measured with database I for each step of Fig. 1. For measuring the processing time, the Min rule with FOM and static threshold is used because it gives the best performance in the experimental results. The average processing time per image was 89.062 ms; we thus confirm that our method can operate at a fast speed [11.2 frames/s (1000/89.062)].

4.

Conclusion

We have studied an eye-state classification method based on a fuzzy logic system. The proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. The eye region is binarized using the fuzzy logic system based on the I and K inputs. Through the fuzzy logic system, the combined image of the I and K pixels is obtained. In order to reflect the effect of all the IVs on the output score of the fuzzy system, we use RWAM, in which the rectangular regions defined by all the IVs are considered when calculating the output score. Then, the final classification of eye openness or closure is made based on the standard deviation of the vertical pixel length calculated from the binarized image. In our research, the classification of eye openness or closure is successfully performed on low-resolution eye images captured in an environment where people watch TV at a distance. Because it uses a fuzzy logic system, our method requires no additional training procedure, irrespective of the database. Through the evaluations with two databases, we confirmed that our method is superior to other methods.

In future work, we will research methods of enhancing the performance of classifying open and closed eyes by combining our fuzzy-based method with training-based ones. In addition, we will work to enhance the performance of eye detection, which can affect the performance of classifying open and closed eyes.

Acknowledgments

This work was supported by the research program of Dongguk University, 2014.

References

1. J. Clark, "Will your next car wake you up when you fall asleep at the wheel?," http://auto.howstuffworks.com/car-driving-safety/safety-regulatory-devices/car-wake-you-up.htm (2008), accessed May 2015.

2. J. Jo et al., "Vision-based method for detecting driver drowsiness and distraction in driver monitoring system," Opt. Eng. 50, 127202 (2011). http://dx.doi.org/10.1117/1.3657506

3. C.-S. Hsieh and C.-C. Tai, "An improved and portable eye-blink duration detection system to warn of driver fatigue," Instrum. Sci. Technol. 41, 429–444 (2013). http://dx.doi.org/10.1080/10739149.2013.796560

4. N. Sharma and V. K. Banga, "Drowsiness warning system using artificial intelligence," World Acad. Sci., Eng. Technol. 4, 647–649 (2010).

5. W. O. Lee et al., "Minimizing eyestrain on a liquid crystal display considering gaze direction and visual field of view," Opt. Eng. 52, 073104 (2013). http://dx.doi.org/10.1117/1.OE.52.7.073104

6. L. Chittaro and R. Sioni, "Exploring eye-blink startle response as a physiological measure for affective computing," in Proc. of Humaine Association Conf. on Affective Computing and Intelligent Interaction, 227–232 (2013).

7. H. Tada et al., "Eye-blink behaviors in 71 species of primates," PLoS One 8, 1–9 (2013). http://dx.doi.org/10.1371/journal.pone.0066018

8. B. Champaty, K. Pal, and A. Dash, "Functional electrical stimulation using voluntary eyeblink for foot drop correction," in Proc. of Int. Conf. on Microelectronics, Communication and Renewable Energy, 1–4 (2013).

9. B. Ashtiani and I. S. MacKenzie, "BlinkWrite2: an improved text entry method using eye blinks," in Proc. of Eye Tracking Research & Applications Symp., 339–346 (2010).

10. A. Bulling et al., "Eye movement analysis for activity recognition using electrooculography," IEEE Trans. Pattern Anal. Mach. Intell. 33, 741–753 (2011). http://dx.doi.org/10.1109/TPAMI.2010.86

11. P. Campadelli, R. Lanzarotti, and G. Lipori, "Eye localization: a survey," in Fundamentals of Verbal and Nonverbal Communication and the Biometric Issues, 234–245 (2007).

12. M. Lalonde et al., "Real-time eye blink detection with GPU-based SIFT tracking," in Proc. of the Fourth Canadian Conf. on Computer and Robot Vision, 481–487 (2007).

13. J. Mohanakrishnan et al., "A novel blink detection system for user monitoring," in Proc. of the 1st IEEE Workshop on User-Centered Computer Vision, 37–42 (2013).

14. W. O. Lee, E. C. Lee, and K. R. Park, "Blink detection robust to various facial poses," J. Neurosci. Methods 193, 356–372 (2010). http://dx.doi.org/10.1016/j.jneumeth.2010.08.034

15. I. Bacivarov, M. Ionita, and P. Corcoran, "Statistical models of appearance for eye tracking and eye-blink detection and measurement," IEEE Trans. Consum. Electron. 54, 1312–1320 (2008). http://dx.doi.org/10.1109/TCE.2008.4637622

16. E. Missimer and M. Betke, "Blink and wink detection for mouse pointer control," in Proc. of the 3rd Int. Conf. on Pervasive Technologies Related to Assistive Environments, 23:1–23:8 (2010).

17. E. Miluzzo, T. Wang, and A. T. Campbell, "EyePhone: activating mobile phones with your eyes," in Proc. of the Second ACM SIGCOMM Workshop on Networking, Systems, and Applications on Mobile Handhelds, 15–20 (2010).

18. J. Wu and M. M. Trivedi, "An eye localization, tracking and blink pattern recognition system: algorithm and evaluation," ACM Trans. Multimed. Comput. Commun. Appl. 6, 8:1–8:23 (2010). http://dx.doi.org/10.1145/1671962

19. A. A. Lenskiy and J.-S. Lee, "Driver's eye blinking detection using novel color and texture segmentation algorithms," Int. J. Control Autom. Syst. 10, 317–327 (2012). http://dx.doi.org/10.1007/s12555-012-0212-0

20. L. Hoang, D. Thanh, and L. Feng, "Eye blink detection for smart glasses," in Proc. of IEEE Int. Symp. on Multimedia, 305–308 (2013).

21. L. C. Trutoiu et al., "Modeling and animating eye blinks," ACM Trans. Appl. Percept. 8, 17:1–17:17 (2011). http://dx.doi.org/10.1145/2010325

22. C. Colombo, D. Comanducci, and A. D. Bimbo, "Robust tracking and remapping of eye appearance with passive computer vision," ACM Trans. Multimed. Comput. Commun. Appl. 3, 20:1–20:20 (2007). http://dx.doi.org/10.1145/1314303

23. C.-C. Chiang et al., "A novel method for detecting lips, eyes and faces in real time," Real-Time Imaging 9, 277–287 (2003). http://dx.doi.org/10.1016/j.rti.2003.08.003

24. B. Cyganek and S. Gruszczyński, "Hybrid computer vision system for drivers' eye recognition and fatigue monitoring," Neurocomputing 126, 78–94 (2014). http://dx.doi.org/10.1016/j.neucom.2013.01.048

25. P. Viola and M. J. Jones, "Robust real-time face detection," Int. J. Comput. Vis. 57, 137–154 (2004). http://dx.doi.org/10.1023/B:VISI.0000013087.49260.fb

26. B.-S. Kim, H. Lee, and W.-Y. Kim, "Rapid eye detection method for non-glasses type 3D display on portable devices," IEEE Trans. Consum. Electron. 56, 2498–2505 (2010). http://dx.doi.org/10.1109/TCE.2010.5681133

27. H. Heo et al., "Nonwearable gaze tracking system for controlling home appliance," Sci. World J. 2014, 1–20 (2014). http://dx.doi.org/10.1155/2014/303670

28. W. O. Lee et al., "New method for face gaze detection in smart television," Opt. Eng. 53, 053104 (2014). http://dx.doi.org/10.1117/1.OE.53.5.053104

29. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice Hall, New Jersey (2010).

30. R. Crane, A Simplified Approach to Image Processing: Classical and Modern Techniques in C, Prentice Hall, New Jersey (1996).

31. B. Narasimhan, "The normal distribution," http://statweb.stanford.edu/~naras/jsm/NormalDensity/NormalDensity.html (1996), accessed July 2015.

32. C. W. Cho et al., "Binocular gaze detection method using a fuzzy algorithm based on quality measurements," Opt. Eng. 53, 053111 (2014). http://dx.doi.org/10.1117/1.OE.53.5.053111

33. G. P. Nam and K. R. Park, "New fuzzy-based retinex method for the illumination normalization of face recognition," Int. J. Adv. Robot. Syst. 9(103), 1–9 (2012). http://dx.doi.org/10.5772/51664

34. K. Y. Shin et al., "Finger-vein image enhancement using a fuzzy-based fusion method with Gabor and retinex filtering," Sensors 14, 3095–3129 (2014). http://dx.doi.org/10.3390/s140203095

35. T. J. Ross, Fuzzy Logic with Engineering Applications, Wiley, New Jersey (2010).

36. "Webcam C600," http://www.logitech.com/en-us/support/webcams/5869 (2014), accessed October 2014.

37. A. Pérez and R. C. Gonzalez, "An iterative thresholding algorithm for image segmentation," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 742–751 (1987). http://dx.doi.org/10.1109/TPAMI.1987.4767981

38. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. SMC-9(1), 62–66 (1979). http://dx.doi.org/10.1109/TSMC.1979.4310076

39. G. Pan et al., "Eyeblink-based anti-spoofing in face recognition from a generic webcamera," in Proc. of IEEE Int. Conf. on Computer Vision, 1–8 (2007). http://dx.doi.org/10.1109/ICCV.2007.4409068

40. "OpenCV," http://www.opencv.org (2015), accessed January 2015.

41. B. S. Bayu and J. Miura, "Fuzzy-based illumination normalization for face recognition," in Proc. of IEEE Workshop on Advanced Robotics and Its Social Impacts, 131–136 (2013).

42. A. Barua, L. S. Mudunuri, and O. Kosheleva, "Why trapezoidal and triangular membership functions work so well: towards a theoretical explanation," J. Uncertain Syst. 8, 164–168 (2014).

43. J. Zhao and B. K. Bose, "Evaluation of membership functions for fuzzy logic controlled induction motor drive," in Proc. of IEEE Annual Conf. of the Industrial Electronics Society, 229–234 (2002).

44. "The PAL Face Database," http://agingmind.utdallas.edu/facedb (2015), accessed January 2015.

45. "AR Face Database," http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html (2015), accessed January 2015.

Biography

Ki Wan Kim received his BS in computer science from Sangmyung University, Seoul, South Korea, in 2012. He is currently pursuing his master’s course in electronics and electrical engineering at Dongguk University. His research interests include image processing and gaze tracking.

Won Oh Lee received his BS degree in electronics engineering from Dongguk University, Seoul, South Korea, in 2009. He received his MS and PhD degrees through the combined master's-doctoral course in electronics and electrical engineering at Dongguk University in 2014. He is a senior researcher at Hyundai Mobis. His research interests include biometrics and pattern recognition.

Yeong Gon Kim received his BS degree in computer engineering and his MS degree in electronics and electrical engineering from Dongguk University, Seoul, South Korea, in 2011 and 2013, respectively. He is currently pursuing his PhD degree in electronics and electrical engineering at Dongguk University. His research interests include biometrics and pattern recognition.

Hyung Gil Hong received his BS degree in electronics engineering from Dongguk University, Seoul, South Korea, in 2012. He received his master’s degree in electronics and electrical engineering at Dongguk University in 2014. He is currently pursuing his PhD degree in electronics and electrical engineering at Dongguk University. His research interests include biometrics and pattern recognition.

Eui Chul Lee received his BS degree in software in 2005, and his master’s and PhD degrees in computer science in 2007 and 2010, respectively, from Sangmyung University, Seoul, South Korea. He is currently an assistant professor in the Department of Computer Science at Sangmyung University. His research interests include computer vision, biometrics, image processing, and HCI.

Kang Ryoung Park received his BS and MS degrees in electronic engineering from Yonsei University, Seoul, South Korea, in 1994 and 1996, respectively. He received his PhD degree in electrical and computer engineering from Yonsei University in 2000. He has been a professor in the division of electronics and electrical engineering at Dongguk University since March 2013. His research interests include image processing and biometrics.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Ki Wan Kim, Won Oh Lee, Yeong Gon Kim, Hyung Gil Hong, Eui Chul Lee, and Kang Ryoung Park "Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes," Optical Engineering 54(3), 033103 (3 March 2015). https://doi.org/10.1117/1.OE.54.3.033103