Most driver-monitoring systems attempt to detect either driver drowsiness or driver distraction, although both factors should be considered for accident prevention. We therefore propose a new driver-monitoring method that considers both factors. We make the following contributions. First, if the driver is looking ahead, drowsiness detection is performed; otherwise, distraction detection is performed. This reduces both the computational cost and the eye-detection error. Second, we propose a new eye-detection algorithm that combines adaptive boosting, adaptive template matching, and blob detection with eye validation, reducing the eye-detection error and the processing time to an extent that is hardly achievable with any single method. Third, to further enhance eye-detection accuracy, eye validation is applied after initial eye detection, using a support vector machine trained on appearance features obtained by principal component analysis (PCA) and linear discriminant analysis (LDA). Fourth, we propose a novel eye-state detection algorithm that combines appearance features obtained using PCA and LDA with statistical features such as the sparseness and kurtosis of the histogram of the horizontal edge image of the eye. Experimental results showed that the detection accuracies for the eye region and the eye state were 99% and 97%, respectively, and both driver drowsiness and distraction were detected with a success rate of 98%.
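For illustration only, the sketch below shows one way the statistical eye-state features mentioned in this abstract (sparseness and kurtosis of the histogram derived from the horizontal-edge image of the eye) could be computed; the edge operator, histogram binning, and normalization are assumptions rather than the authors' exact procedure.

```python
# Minimal sketch (not the authors' code) of the statistical eye-state features:
# sparseness and kurtosis of a histogram computed from the horizontal-edge image
# of a cropped eye region. Bin count and normalization are illustrative choices.
import numpy as np
import cv2

def eye_state_features(eye_gray):
    """Return (sparseness, kurtosis) of the horizontal-edge histogram."""
    # Horizontal edges (vertical derivative) emphasize the eyelid/iris boundary.
    edges = np.abs(cv2.Sobel(eye_gray, cv2.CV_32F, dx=0, dy=1, ksize=3))

    # Histogram of edge magnitudes (a row-projection histogram is an alternative).
    hist, _ = np.histogram(edges, bins=32, range=(0, edges.max() + 1e-6))
    h = hist.astype(np.float64)
    h /= h.sum() + 1e-12

    # Sparseness in the sense of Hoyer: 1 for a single-spike histogram, 0 for uniform.
    n = h.size
    l1, l2 = np.abs(h).sum(), np.sqrt((h ** 2).sum())
    sparseness = (np.sqrt(n) - l1 / (l2 + 1e-12)) / (np.sqrt(n) - 1)

    # Excess kurtosis of the histogram values.
    mu, sigma = h.mean(), h.std() + 1e-12
    kurtosis = ((h - mu) ** 4).mean() / sigma ** 4 - 3.0

    return sparseness, kurtosis

# An open eye produces strong, concentrated horizontal edges, which tends to raise
# both features; a classifier such as an SVM would combine them with the PCA/LDA
# appearance features for the final open/closed decision.
```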
Supporting an unconstrained user interface is an important issue in iris recognition. Various methods try to remove the constraint that the iris be placed close to the camera, including portal-based and pan-tilt-zoom (PTZ)-based solutions. Generally speaking, a PTZ-based system has two cameras: a scene camera and an iris camera. The scene camera detects the eye's location and passes this information to the iris camera, which captures a high-resolution image of the person's iris. Existing PTZ-based systems are divided into separate types and parallel types, according to how the scene camera and the iris camera are combined. This paper proposes a novel PTZ-based iris recognition system in which the iris camera and the scene camera are combined in a coaxial optical structure. The two cameras are mounted orthogonally and a cold mirror is inserted between them, such that the optical axes of the two cameras become coincident. Because of the coaxial optical structure, the proposed system does not need the compensation for optical-axis displacement that parallel-type systems require. Experimental results show that the coaxial type can acquire an iris image more quickly and accurately than a parallel type when the stand-off distance is between 1.0 and 1.5 m.
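The advantage of the coaxial structure can be seen with a small geometric sketch (illustrative assumptions only, not the paper's implementation): in a parallel-type system the pan command derived from the scene camera must be corrected for the baseline between the two optical axes, and that correction depends on the stand-off distance, whereas with coincident axes no such correction is needed.

```python
# Illustrative sketch of why a coaxial structure avoids displacement compensation.
# In a parallel type, the scene and iris cameras are offset by a baseline b, so the
# pan angle computed from the scene camera must be corrected using the (estimated)
# stand-off distance z; with coincident optical axes the correction term vanishes.
import math

def pan_angle_parallel(x_scene_m, z_m, baseline_m):
    """Pan angle (rad) for the iris camera in a parallel-type setup."""
    # The eye appears at lateral offset x in the scene camera frame, but the iris
    # camera sits baseline_m away, so its required pan depends on z.
    return math.atan2(x_scene_m - baseline_m, z_m)

def pan_angle_coaxial(x_scene_m, z_m):
    """Pan angle (rad) when a cold mirror makes the two optical axes coincident."""
    return math.atan2(x_scene_m, z_m)

# Example: a hypothetical 5 cm baseline at z = 1.2 m already changes the pan command
# by about 2.4 deg, enough to push a narrow-field iris camera off target if z is
# estimated poorly.
x, z, b = 0.10, 1.2, 0.05
print(math.degrees(pan_angle_parallel(x, z, b)), math.degrees(pan_angle_coaxial(x, z)))
```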
Using quantitative evaluation, we show that constrained least-squares (CLS)-based image restoration extends the depth of capture volume (DCV) significantly, and we suggest that the degradation of a feature extractor's performance under defocus blurring should be considered when developing unconstrained iris recognition systems. When developing an unconstrained iris recognition system (assuming a relatively cooperative user), extending the DCV stands out as one of the most important problems. Although it has already been reported that CLS-based image restoration can improve recognition performance by mitigating the effect of defocus blurring, until now there have been no reports on how far the DCV can be extended. This work inspects the Hamming distance (HD) distribution and the error rate while varying the strength of defocus blurring on iris images from CASIA-IrisV3-Interval. We show that the ratio of the maximum blurring parameters that still satisfy a given acceptable error rate equals the ratio of the corresponding DCVs. Using this fact, experimental results show that CLS-based image restoration extends the DCV by more than 70%. Additionally, we observe that a log-Gabor filter is superior to a Gabor filter as the feature extractor: although the two feature extractors show a relatively small performance gap on a well-focused image set, the gap becomes significantly larger as the blurring becomes more severe. This suggests an important principle: the performance characteristics of the feature extractor under defocus blurring should be considered when selecting or developing the feature extractor of an unconstrained iris recognition system.
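A minimal sketch of frequency-domain CLS restoration, in its standard textbook form, is shown below; the disk-shaped defocus PSF and the regularization weight gamma are illustrative assumptions and may differ from the restoration filter actually used in the paper.

```python
# Minimal sketch of constrained least-squares (CLS) restoration in the frequency
# domain (standard formulation, not necessarily the paper's exact variant). The
# disk-shaped defocus PSF and the weight gamma are illustrative assumptions.
import numpy as np

def disk_psf(shape, radius):
    """Crude disk PSF modeling defocus blur with the given radius in pixels."""
    h, w = shape
    y, x = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    psf = ((y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2).astype(np.float64)
    return psf / psf.sum()

def cls_restore(blurred, psf, gamma=0.01):
    """CLS estimate: F = conj(H) * G / (|H|^2 + gamma * |P|^2), with P a Laplacian."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)

    # Laplacian smoothness constraint, zero-padded to the image size.
    lap = np.zeros_like(blurred, dtype=np.float64)
    lap[:3, :3] = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
    P = np.fft.fft2(lap)

    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))

# Usage idea: restore synthetically defocused iris images before feature extraction,
# then compare Hamming-distance distributions with and without restoration to see
# how far the acceptable blurring range (and hence the DCV) is extended.
```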
Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications, mainly because of user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that offers users an unconstrained environment: a large operating range, freedom to move while standing, and good-quality iris images captured in an acceptable time. The proposed system makes the following three contributions compared with previous work: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection; (2) the iris location within this large capture volume is found quickly by a 1-D vertical face search starting from the user's horizontal position obtained by the light stripe projection; and (3) zooming and focusing on the user's irises at a distance are accurate and fast because the 3-D position of the face is estimated from the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of user movement.
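The geometric idea behind contributions (2) and (3) can be sketched as follows, using hypothetical calibration values rather than the authors' implementation: a point observed on the projected light stripe is triangulated by intersecting the camera ray with the known light plane, and the resulting 3-D position is converted into pan/tilt commands, with the recovered depth also driving zoom and focus.

```python
# Illustrative sketch (assumed geometry, not the paper's implementation) of
# (a) plane-ray triangulation of a point lying on the projected light stripe and
# (b) conversion of the resulting 3-D position into pan/tilt commands for the
# PTZ iris camera. The intrinsics K and the light-plane parameters are hypothetical.
import math
import numpy as np

def triangulate_on_stripe(u, v, K, plane_n, plane_d):
    """Intersect the camera ray through pixel (u, v) with the light plane n.X = d."""
    fx, fy, cx, cy = K
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # ray direction, camera frame
    t = plane_d / float(np.dot(plane_n, ray))            # ray parameter at the plane
    return t * ray                                        # 3-D point (x, y, z), depth = t

def pan_tilt_deg(point):
    """Pan/tilt angles that aim the PTZ camera at a 3-D point (x, y, z)."""
    x, y, z = point
    pan = math.degrees(math.atan2(x, z))
    tilt = math.degrees(math.atan2(-y, math.hypot(x, z)))
    return pan, tilt

# Made-up example: the stripe is seen at pixel (400, 310) with intrinsics
# (fx, fy, cx, cy) and a roughly horizontal light plane about 0.3 m below the
# camera. The recovered depth z then sets zoom/focus, and the face is located by
# a 1-D search along the vertical direction at that horizontal position.
K = (800.0, 800.0, 320.0, 240.0)
p = triangulate_on_stripe(400, 310, K, np.array([0.0, 1.0, 0.05]), 0.3)
print(p, pan_tilt_deg(p))
```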