This PDF file contains the front matter associated with SPIE
Proceedings Volume 6539, including the Title Page, Copyright
information, Table of Contents, Introduction, and the
Conference Committee listing.
Significant advances continue to be made in biometric technology. However, the global war on terrorism and our increasingly electronic society have created a societal need for large-scale, interoperable biometric capabilities that exceed those of current off-the-shelf technology. At the same time, there are concerns that large-scale
implementation of biometrics will infringe our civil liberties and offer increased opportunities for identity theft. This
paper looks beyond the basic science and engineering of biometric sensors and fundamental matching algorithms and
offers approaches for achieving greater performance and acceptability of applications enabled with currently available
biometric technologies. The discussion focuses on three primary biometric system aspects: performance and scalability,
interoperability, and cost benefit. Significant improvements in system performance and scalability can be achieved
through careful consideration of the following elements: biometric data quality, human factors, operational
environment, workflow, multibiometric fusion, and integrated performance modeling. Application interoperability
hinges upon some of the factors noted above as well as adherence to interface, data, and performance standards.
However, conforming to such standards can come at the price of decreased local system performance.
The development of biometric performance-based cost benefit models can help determine realistic requirements and
acceptable designs.
We address the problem of score level fusion of intramodal and multimodal experts in the context of biometric
identity verification. We investigate the merits of confidence based weighting of component experts. In contrast
to the conventional approach where confidence values are derived from scores, we use instead raw measures of
biometric data quality to control the influence of each expert on the final fused score. We show that quality based
fusion gives better performance than quality free fusion. The use of quality weighted scores as features in the
definition of the fusion functions leads to further improvements. We demonstrate that the achievable performance
gain is also affected by the choice of fusion architecture. The evaluation of the proposed methodology involves six face experts and one speech verification expert, and is carried out on the XM2VTS database.
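As an illustration of quality-controlled weighting, the sketch below fuses expert scores with weights proportional to raw quality measures; the [0, 1] quality scale and the linear weighting rule are illustrative assumptions, not the paper's trained fusion functions.

```python
import numpy as np

def quality_weighted_fusion(scores, qualities):
    """Fuse per-expert match scores, weighting each expert by its raw
    biometric-quality measure (assumed here to lie in [0, 1])."""
    scores = np.asarray(scores, dtype=float)
    qualities = np.asarray(qualities, dtype=float)
    weights = qualities / qualities.sum()  # normalise so weights sum to 1
    return float(np.dot(weights, scores))

# A low-quality expert (e.g. a blurred face image) contributes less:
fused = quality_weighted_fusion([0.9, 0.2], [0.8, 0.1])
```

With equal qualities this reduces to plain score averaging; unequal qualities pull the fused score toward the more trustworthy expert.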
Fingerprints are one of the most commonly used and relied-upon biometric technologies. However, the captured fingerprint image is often far from ideal: acquisition techniques can be slow and cumbersome to use, yet still fail to provide complete fingerprint information. Most of the difficulties arise from the contact of the fingerprint surface with the sensor platen. To overcome these difficulties we have been developing a non-contact scanning system for acquiring a 3-D scan of a finger at sufficiently high resolution, which is then converted into a 2-D rolled-equivalent image. In this paper, we describe quantitative measures for evaluating scanner performance. Specifically, we use image software components developed by the National Institute of Standards and Technology to derive our performance metrics. Of the eleven identified metrics, three were found to be most suitable for evaluating scanner performance. A comparison is also made between 2D fingerprint images obtained by traditional means and the 2D images obtained by unrolling the 3D scans, and the quality of the acquired scans is quantified using these metrics.
A comparative study on multiple participants was undertaken to quantify the ability of a multispectral imaging
fingerprint sensor to perform reliable biometric matching in the presence of extreme sampling conditions. These extreme
conditions included finger wetness, dirt, chalk, acetone, bright ambient light, and low contact pressure during image
acquisition. The comparative study included three commercially available total internal reflectance sensors, run in
parallel with the multispectral imaging sensor and under identical sampling conditions. Performance assessments
showed that the multispectral imaging sensor was able to provide fingerprint images that produced good biometric
performance even under conditions in which the performance of the total internal reflectance sensors was severely
degraded. Additional analysis showed that the performance advantage of the multispectral images taken under these
conditions was maintained even when matched against enrollment images collected on total internal reflectance sensors.
In this paper we propose a novel 3D face recognition system. Furthermore, we propose and discuss the development of a 3D reconstruction system designed specifically for face recognition. The reconstruction subsystem utilises a capture rig comprising six cameras: two independent stereo pairs image the subject's face under structured-light projection, while the remaining two cameras capture texture data under normal lighting conditions. Whilst the most common approaches to 3D reconstruction use least-squares comparison of image intensity values, our system achieves dense point matching using Gabor
Wavelets as the primary correspondence measure. The matching process is aided by Voronoi segmentation
of the input images using strong confidence correlations as Voronoi seeds. Additional matches are then
propagated outwards from the initial seed matches to produce a dense point cloud and surface model. Within
the recognition subsystem, models are first registered to a generic head model, and then an ICP variant is
applied between the recognition subject and each model in the comparison database, using the average
point-to-plane error as the recognition metric. Our system takes full advantage of the additional information
obtained from the shape and structure of the face, thus combating some of the inherent weaknesses of traditional 2D methods, such as sensitivity to pose and illumination variations. This novel reconstruction/recognition
process achieves 98.2% accuracy on databases containing in excess of 175 meshes.
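The point-to-plane recognition metric can be sketched as follows; the brute-force nearest-neighbour search and the precomputed per-point normals are simplifying assumptions (a full ICP pipeline would use a spatial index and iterate the alignment).

```python
import numpy as np

def point_to_plane_error(points, surface_points, surface_normals):
    """Average point-to-plane distance from `points` to a surface sampled
    as (surface_points, surface_normals); the nearest surface point is
    chosen by Euclidean distance.  A sketch of the recognition metric
    only, not the full ICP loop."""
    errors = []
    for p in points:
        d2 = np.sum((surface_points - p) ** 2, axis=1)
        i = int(np.argmin(d2))
        # distance measured along the surface normal at the nearest point
        errors.append(abs(np.dot(p - surface_points[i], surface_normals[i])))
    return float(np.mean(errors))
```

A lower average error between a probe mesh and a gallery model indicates a closer surface match.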
An automatic classification scheme of on-line handwritten signatures is presented. A Multilayer Perceptron
(MLP) with a hidden layer is used as the classifier, and two signature classes are considered: legible and non-legible names. Signatures are represented by different feature subsets obtained from
global information. Mahalanobis distance is used to rank the parameters and feature selection is then applied
based on the top ranked features. Experimental results are given on the MCYT signature database comprising
330 signers. It is shown experimentally that automatic on-line signature classification based on the name legibility
is feasible.
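A per-feature Mahalanobis-style ranking of the kind described can be sketched as below; the univariate separation measure (mean difference over pooled standard deviation) is an assumed simplification of the paper's formulation.

```python
import numpy as np

def rank_features(class_a, class_b):
    """Rank features by per-feature separation between two classes:
    |mean_a - mean_b| / pooled standard deviation.  Returns feature
    indices, most discriminative first."""
    class_a = np.asarray(class_a, dtype=float)
    class_b = np.asarray(class_b, dtype=float)
    diff = np.abs(class_a.mean(axis=0) - class_b.mean(axis=0))
    pooled = np.sqrt((class_a.var(axis=0) + class_b.var(axis=0)) / 2) + 1e-12
    return list(np.argsort(-(diff / pooled)))
```

Feature selection then keeps only the top-ranked indices before training the MLP.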
Hand-based authentication is a key biometric technology with a wide range of potential applications both in
industry and government. Traditionally, hand-based authentication is performed by extracting information from
the whole hand. To account for hand and finger motion, guidance pegs are employed to fix the position and
orientation of the hand. In this paper, we consider a component-based approach to hand-based verification. Our
objective is to investigate the discrimination power of different parts of the hand in order to develop a simpler,
faster, and possibly more accurate and robust verification system. Specifically, we propose a new approach which decomposes the hand into different regions, corresponding to the fingers and the back of the palm, and performs
verification using information from certain parts of the hand only. Our approach operates on 2D images acquired
by placing the hand on a flat lighting table. Using a part-based representation of the hand allows the system to
compensate for hand and finger motion without using any guidance pegs. To decompose the hand into different regions, we use a robust methodology based on morphological operators which does not require detecting any landmark points on the hand. To capture the geometry of the back of the palm and the fingers in sufficient detail, we employ high-order Zernike moments, which are computed using an efficient methodology. The proposed
approach has been evaluated on a database of 100 subjects with 10 images per subject, illustrating promising
performance. Comparisons with related approaches using the whole hand for verification illustrate the superiority
of the proposed approach. Moreover, qualitative comparisons with state-of-the-art approaches indicate that the
proposed approach has comparable or better performance.
The paper proposes an efficient indexing scheme for binary feature templates using a B+ tree. In this scheme the
input image is decomposed into approximation, vertical, horizontal and diagonal coefficients using the discrete
wavelet transform. The binarized approximation coefficient at second level is divided into four quadrants of equal
size and Hamming distance (HD) for each quadrant with respect to sample template of all ones is measured. This
HD value of each quadrant is used to generate upper and lower range values which are inserted into B+ tree.
The nodes of tree at first level contain the lower and upper range values generated from HD of first quadrant.
Similarly, lower and upper range values for the three quadrants are stored in the second, third and fourth level
respectively. Finally leaf node contains the set of identifiers. At the time of identification, the test image is
used to generate HD for four quadrants. Then the B+ tree is traversed based on the value of HD at every node
and terminates to leaf nodes with set of identifiers. The feature vector for each identifier is retrieved from the
particular bin of secondary memory and matched with test feature template to get top matches. The proposed
scheme is implemented on an ear biometric database collected at IIT Kanpur. The system gives an overall accuracy of 95.8% at a penetration rate of 34%.
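The per-quadrant Hamming-distance keys can be sketched as follows; the `tolerance` widening parameter and the plain Python return structure are illustrative assumptions (the paper inserts these ranges into successive levels of a B+ tree).

```python
import numpy as np

def quadrant_hamming_keys(binary_map, tolerance=2):
    """Split a binarised coefficient map into four equal quadrants and,
    for each, compute the Hamming distance (HD) to an all-ones template.
    Returns (lower, upper) range keys per quadrant, one per tree level.
    `tolerance` is an assumed widening parameter, not from the paper."""
    b = np.asarray(binary_map)
    h, w = b.shape[0] // 2, b.shape[1] // 2
    quads = [b[:h, :w], b[:h, w:], b[h:, :w], b[h:, w:]]
    keys = []
    for q in quads:
        hd = int(np.sum(q == 0))  # bits differing from the all-ones template
        keys.append((max(hd - tolerance, 0), hd + tolerance))
    return keys
```

At identification time the same four HD values are computed for the query and used to follow matching ranges down the tree.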
This paper investigates a new approach for human ear identification using holistic
grey-level information. We employ Log-Gabor wavelets to extract the phase
information, i.e. ear codes, from the 1D gray-level signals. Thus each ear is represented by a unique ear code (phase template). The query ear images are
compared with those in the database using Hamming distance. The minimum
Hamming distance obtained from the rotation of ear template is used to authenticate
the user. Our experiments on two different public ear databases achieve promising
results and suggest its utility in ear-based authentication. This paper also illustrates
that the phase information extracted from ear images can achieve significant
performance improvement compared to the appearance-based approaches employed in the literature.
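Rotation-compensated Hamming matching of binary ear codes can be sketched as below; the 1D circular-shift search stands in for the paper's rotation of the ear template.

```python
import numpy as np

def earcode_distance(code_a, code_b):
    """Minimum normalised Hamming distance between two binary ear codes,
    taken over all circular rotations of one template (compensating for
    ear rotation at capture time)."""
    a, b = np.asarray(code_a), np.asarray(code_b)
    best = 1.0
    for shift in range(len(a)):
        hd = float(np.mean(np.roll(a, shift) != b))
        best = min(best, hd)
    return best
```

The user is authenticated when this minimum distance falls below a decision threshold.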
To compensate for the different orientations of two fingerprint images, matching systems use a reference point and a set
of transformation parameters. Fingerprint minutiae are compared on their positions relative to the reference points, using
a set of thresholds for the various matching features. However a pair of minutiae might have similar values for some of
the features compensated by dissimilar values for others; this tradeoff cannot be modeled by arbitrary thresholds, and
might lead to a number of false matches. Instead, given a list of potential correspondences of minutiae points, we could use a static classifier, such as a support vector machine (SVM), to eliminate some of the false matches. A 2-class model is
built using sets of minutiae correspondences from fingerprint pairs known to belong to the same and different users. For
a test pair of fingerprints, a similar set of minutiae correspondences is extracted and given to the recognizer, using only
those classified as genuine matches to calculate the similarity score, and thus, the matching result. We have built
recognizers using different combinations of fingerprint features and have tested them against the FVC 2002 database.
Using this recognizer reduces the number of false minutiae matches by 19%, while only 5% of the minutiae pairs
corresponding to fingerprints of the same user are rejected. We study the effect of such a reduction on the final error rate,
using different scoring schemes.
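The feature vectors fed to such a 2-class classifier might be built as follows; the (x, y, theta) minutia representation and this particular three-feature set are illustrative assumptions, not the paper's exact features.

```python
import numpy as np

def correspondence_features(m1, m2):
    """Feature vector for one candidate minutiae correspondence, as could
    be fed to a 2-class SVM: differences in position (assumed already
    expressed relative to each print's reference point) and orientation.
    Minutiae are (x, y, theta) triples."""
    x1, y1, t1 = m1
    x2, y2, t2 = m2
    dtheta = abs(t1 - t2) % (2 * np.pi)
    dtheta = min(dtheta, 2 * np.pi - dtheta)  # wrap the angle difference
    return np.array([abs(x1 - x2), abs(y1 - y2), dtheta])
```

Training pairs from same-user prints are labelled genuine, pairs from different users impostor; at test time only correspondences classified genuine contribute to the similarity score.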
Given a query fingerprint, the goal of indexing is to identify and retrieve a set of candidate fingerprints from a
large database in order to determine a possible match. This significantly improves the response time of fingerprint
recognition systems operating in the identification mode. In this work, we extend the indexing framework based
on minutiae triplets by utilizing ridge curve parameters in conjunction with minutiae information to enhance
indexing performance. Further, we demonstrate that the proposed technique facilitates the indexing of fingerprint
images acquired using different sensors. Experiments on the publicly available FVC database confirm the utility
of the proposed approach in indexing fingerprints.
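The triplet part of such an indexing scheme can be sketched as below: the three pairwise side lengths of each minutiae triangle are invariant to rotation and translation, so their quantised, sorted values make usable index keys. The `quantum` step is an assumed parameter, and the ridge-curve information the paper adds is omitted here.

```python
import numpy as np
from itertools import combinations

def triplet_keys(minutiae, quantum=10):
    """Index keys from minutiae triplets: the sorted, quantised side
    lengths of each triangle formed by three minutia positions.
    `quantum` (pixels) is an assumed quantisation step."""
    pts = np.asarray(minutiae, dtype=float)
    keys = set()
    for i, j, k in combinations(range(len(pts)), 3):
        sides = sorted(
            int(np.linalg.norm(pts[a] - pts[b]) // quantum)
            for a, b in ((i, j), (j, k), (i, k))
        )
        keys.add(tuple(sides))
    return keys
```

At enrolment each key maps to the fingerprint identifiers that produced it; a query retrieves the candidates sharing the most keys.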
Matching of partial fingerprints has important applications in both biometrics and forensics. It is well known that the accuracy of minutiae-based matching algorithms decreases dramatically as the number of available minutiae
decreases. When singular structures such as core and delta are unavailable, general ridges can be utilized. Some
existing highly accurate minutiae matchers do use local ridge similarity for fingerprint alignment. However, ridges
cover relatively larger regions, and therefore ridge similarity models are sensitive to non-linear deformation. An
An algorithm is proposed here to utilize ridges more effectively, via representative ridge points. These points are represented similarly to minutiae and used together with minutiae in existing minutiae matchers with simple
modification. Algorithm effectiveness is demonstrated using both full and partial fingerprints. The performance
is compared against two minutiae-only matchers (Bozorth and k-minutiae). Effectiveness with full fingerprint
matching is demonstrated using the four databases of FVC2002, where the error rate decreases by 0.2-0.7% using the best matching algorithm. The effectiveness is more significant in the case of partial fingerprint matching, which is demonstrated with sixty partial fingerprint databases generated from FVC2002 (with five levels of the number of minutiae available). When only 15 minutiae are available, the error rate decreases by 5-7.5%. Thus the method,
which involves selecting representative ridge points, minutiae matcher modification, and a group of minutiae
matchers, demonstrates improved performance on full and especially partial fingerprint matching.
The increasing use of biometrics in different environments presents
new challenges. Most importantly, biometric data are irreplaceable.
Therefore, storing biometric templates, which are unique to each user, entails significant security risks. In this paper, we propose a geometric transformation for securing minutiae-based fingerprint templates. The proposed scheme employs a robust
one-way transformation that maps geometrical configuration of the
minutiae points into a fixed-length code vector. This representation
enables efficient alignment and reliable matching. Experiments are
conducted by applying the proposed method to synthetically generated minutiae point sets. Preliminary results show that the proposed scheme provides a simple and effective solution to the template security problem of minutiae-based fingerprint templates.
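One simple instance of an alignment-free, fixed-length, hard-to-invert map is sketched below; the pairwise-distance histogram is an illustrative stand-in for the paper's transform, not its actual construction.

```python
import numpy as np

def minutiae_to_code(minutiae, n_bins=16, max_dist=400.0):
    """One-way map from a minutiae point set to a fixed-length vector: a
    normalised histogram of all pairwise distances.  Translation- and
    rotation-invariant (so no alignment is needed) and not invertible
    back to the point set.  `n_bins` and `max_dist` are assumed values."""
    pts = np.asarray(minutiae, dtype=float)
    n = len(pts)
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i in range(n) for j in range(i + 1, n)]
    hist, _ = np.histogram(dists, bins=n_bins, range=(0.0, max_dist))
    return hist / max(hist.sum(), 1)  # fixed-length, normalised code
```

Two codes can then be compared directly (e.g. by Euclidean distance) without ever reconstructing the minutiae.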
Reliable person recognition is important for secure access and commercial applications requiring human identification.
Face recognition (FR) is an important technology being developed for human identification. Algorithms and systems for
large population face recognition (LPFR) are of significant interest in applications such as watch lists and video
surveillance. In this paper, we present correlation filter-based feature analysis methods that effectively exploit available
generic training data to represent a large number of subjects and thus improve the performance for LPFR. We first
introduce a general framework - class-dependence feature analysis (CFA), which uses correlation filters to provide a
discriminant feature representation for LPFR. We then introduce two variants of the correlation filter-based CFA
methods: 1) the kernel correlation filter CFA (KCFA) that generates nonlinear decision boundaries and significantly
improves the recognition performance without greatly increasing the computational load, and 2) the binary coding CFA
that uses binary coding to reduce the number of correlation filters and applies error control coding (ECC) to improve the
recognition performance. These two variants offer ways to trade off between the computational complexity and the
recognition accuracy of the CFA methods. We test our proposed algorithms on the face recognition grand challenge
(FRGC) database and show that the correlation filter-based CFA approach improves the recognition rate and reduces the
computational load over the conventional correlation filters.
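The underlying correlation-filter principle can be illustrated with a plain frequency-domain matched filter; the CFA/KCFA designs build on this idea but use optimised, discriminatively trained filters rather than the raw template assumed here.

```python
import numpy as np

def correlation_output(image, template):
    """Cross-correlate an image with a matched-filter-style template in
    the frequency domain; a genuine match yields a sharp correlation
    peak.  A minimal illustration of the correlation-filter principle,
    not the CFA/KCFA filter design itself."""
    F = np.fft.fft2(image)
    H = np.conj(np.fft.fft2(template, s=image.shape))
    return np.real(np.fft.ifft2(F * H))
```

The location of the correlation peak gives the template's position; its sharpness relative to the sidelobes indicates match quality.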
Most existing face recognition algorithms require face images with a minimum resolution. Meanwhile, the rapidly
emerging need for near-ground long range surveillance calls for a migration in face recognition from close-up distances
to long distances and accordingly from low and constant resolution to high and adjustable resolution. With limited
optical zoom capability restricted by the system hardware configuration, super-resolution (SR) provides a promising
solution with no additional hardware requirements. In this paper, a brief review of existing SR algorithms is conducted
and their capability of improving face recognition rates (FRR) for long range face images is studied. Algorithms
applicable to real-time scenarios are implemented and their performance in terms of FRR is examined using the IRIS-LRHM face database [1]. Our experimental results show that SR followed by appropriate enhancement, such as wavelet-based processing, is able to achieve FRR comparable to that obtained with equivalent optical zoom.
In this paper, we propose a novel method for performing robust super-resolution of face images by solving the practical
problems of the traditional manifold analysis. Face super-resolution is to recover a high-resolution face image from a
given low-resolution face image by modeling the face image space in view of multiple resolutions. In particular, face
super-resolution is useful to enhance face images captured from surveillance footage. Face super-resolution should be
preceded by analyzing the characteristics of the face image distribution. In literature, it has been shown that face images
lie on a nonlinear manifold by various manifold learning algorithms, so if the manifold structure is taken into
consideration for modeling the face image space, the results of face super-resolution can be improved. However, there
are some practical problems which prevent the manifold analysis from being applied to super-resolution. Almost all of
the manifold learning methods cannot generate mapping functions for new test images which are absent from a training
set. Also, there exists another significant problem when applying manifold analysis to super-resolution: super-resolution seeks to recover a high-dimensional image from a low-dimensional one, while manifold learning methods perform the exact opposite for dimensionality reduction.
To overcome these limitations of applying manifold analysis to super-resolution, we propose a novel face super-resolution method using Locality Preserving Projections (LPP). LPP has an advantage over other manifold learning methods in that its well-defined linear projections allow us to formulate well-defined mappings between high-dimensional and low-dimensional data. Moreover, we show that the LPP coefficients of an unknown high-resolution image can be inferred from a given low-resolution image using a MAP estimator.
In this paper we demonstrate the subspace generalization power of the kernel correlation feature analysis (KCFA)
method for extracting a low dimensional subspace that has the ability to represent new unseen datasets. Examining the
portability of this algorithm across different datasets is an important practical aspect of real-world face recognition
applications where the technology cannot be dataset-dependent. In most face recognition literature, algorithms are demonstrated on datasets by training on one portion of the dataset and testing on the remainder. Generally, the testing subjects partially or totally overlap the training subjects, though with disjoint images captured in different sessions. Thus, some of the expected facial variations and the people's faces are modeled in the training set. In
this paper we describe how we efficiently build a compact feature subspace using kernel correlation filter analysis on the
generic training set of the FRGC dataset and use that basis for recognition on a different dataset. The KCFA feature
subspace has a total dimension that corresponds to the number of training subjects; we chose to vary this number up to all 222 subjects available in the FRGC generic dataset. We test the subspace built by KCFA by
projecting other well-known face datasets onto it. We show that this feature subspace represents and discriminates unseen datasets well, producing good verification and identification rates compared to other subspace and dimensionality reduction methods such as PCA (when trained on the same FRGC generic dataset). Its efficiency, lower dimensionality and discriminative power make it more practical and powerful than PCA as a robust dimensionality reduction method for modeling faces and facial variations.
Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric
trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism.
Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in
public places such as airports, train stations and shopping centers. They are used to detect and prevent crime,
shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expression, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations in expression and illumination. In particular, we
will investigate the use of a combination of wavelet frequency channels for a multi-stream face recognition using
various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We shall present experimental results on the
performance of our proposed schemes for a number of face databases including a new AV database recorded on
a PDA. By analyzing the various experimental data, we shall demonstrate that the multi-stream approach is more robust against variations in illumination and facial expression than the previous single-stream approach.
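One level of the subband decomposition behind such multi-stream schemes can be sketched with a Haar transform; the unnormalised mean/difference filters are an illustrative simplification, and the paper may use other wavelets.

```python
import numpy as np

def haar_subbands(image):
    """One-level 2-D Haar decomposition into the four subbands (LL, LH,
    HL, HH) that would serve as separate face-signal streams.  Image
    sides must be even.  Uses unnormalised mean/difference filters as a
    minimal sketch."""
    x = np.asarray(image, dtype=float)
    # transform rows: low-pass = pairwise mean, high-pass = pairwise difference
    lo = (x[:, 0::2] + x[:, 1::2]) / 2
    hi = (x[:, 0::2] - x[:, 1::2]) / 2
    # transform columns of each intermediate result
    LL = (lo[0::2] + lo[1::2]) / 2
    LH = (lo[0::2] - lo[1::2]) / 2
    HL = (hi[0::2] + hi[1::2]) / 2
    HH = (hi[0::2] - hi[1::2]) / 2
    return LL, LH, HL, HH
```

Each subband is then matched independently and the per-stream scores are fused into one decision.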
This paper presents a fully automatic real-time face recognition system from video by using Active Appearance Models
(AAM) for fitting and tracking facial fiducial landmarks and warping the non-frontal faces into a frontal pose. By
By implementing a face detector to provide a suitable initialization for the AAM shape search and fitting process, new facial images are interpreted and tracked accurately in real time (15 fps). Using an Active Appearance Model
(AAM) for normalizing facial images under different poses and expressions is crucial to providing improved face
recognition performance, as most systems' matching performance degrades with even the smallest pose variation. Furthermore, the AAM is a more robust feature-registration and tracking approach, as most systems detect and locate only the eyes
while AAMs detect and track multiple fiducial points on the face holistically. We show examples of AAM fitting and
tracking and pose normalization including an illumination pre-processing step to remove specular and cast shadow
illumination artifacts on the face. We show example pose normalization images as well as example matching scores
showing the improved performance of this pose correction method.
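A minimal sketch of the landmark-based normalization step, under simplifying assumptions: given fiducial points tracked by an AAM, a least-squares affine transform maps them onto canonical frontal positions (the three-point canonical layout below is hypothetical, not taken from the paper).

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding (x, y) points, N >= 3.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (N, 3) homogeneous points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve X @ A = dst
    return A.T                                   # (2, 3)

# Hypothetical canonical frontal positions for three fiducial points
# (left eye, right eye, nose tip) in a 64x64 normalised face crop.
CANONICAL = np.array([[20.0, 24.0], [44.0, 24.0], [32.0, 40.0]])

def normalise_landmarks(tracked):
    """Map AAM-tracked points into the canonical frontal frame."""
    A = estimate_affine(tracked, CANONICAL)
    ones = np.ones((tracked.shape[0], 1))
    return (A @ np.hstack([tracked, ones]).T).T
```

A full AAM additionally fits a statistical shape-and-texture model and would warp the face texture (not just the points) into the canonical frame; the affine fit above only conveys the registration step.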
As biometric recognition systems are widely applied in various application areas, their security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and strengthen biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error-correction coding, and biometrics. The key component of the algorithm is the conversion of biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent, and collision-free, so that authentication performance can be optimized and information leakage avoided. Depending on the statistical characteristics of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. The resulting binary vectors provide authentication performance similar to that of the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates, based on an efficient statistical analysis. The algorithm estimates the statistical characteristics of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian-distributed templates.
We define normalization symmetry as the invariance of biometric performance under strictly monotonic functions of the match scores. We use this symmetry to clarify the essential difference between the standard score-level fusion approaches of the sum rule and Neyman-Pearson fusion. We then express Neyman-Pearson fusion with match scores defined as false acceptance rates on a logarithmic scale, and show that, stated in this form, it reduces to sum-rule fusion for ROC curves with logarithmic slope. We also introduce a one-parameter model of biometric performance and use it to express Neyman-Pearson fusion as a weighted sum rule.
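A small sketch of the score representation this abstract works with, under our own simplifying assumptions: each raw score is mapped to the negative log of its empirical false acceptance rate (estimated from impostor comparisons), after which fusion is a (weighted) sum. The names and the empirical-FAR estimator are illustrative.

```python
import numpy as np

def score_to_log_far(score, impostor_scores):
    """Map a raw match score to -log10 of its empirical false accept rate.

    impostor_scores: raw scores from known impostor comparisons; higher
    raw score means a stronger match. The empirical FAR at score s is
    the fraction of impostor scores at or above s.
    """
    imp = np.asarray(impostor_scores, dtype=float)
    far = float(np.mean(imp >= score))
    far = max(far, 1.0 / (imp.size + 1))  # avoid log(0) beyond the sample
    return -np.log10(far)

def fused_score(scores, impostor_sets, weights=None):
    """Weighted sum of per-matcher log-FAR scores (higher = stronger)."""
    if weights is None:
        weights = [1.0] * len(scores)
    return sum(w * score_to_log_far(s, imp)
               for w, s, imp in zip(weights, scores, impostor_sets))
```

Because the log-FAR mapping is strictly monotonic in the raw score, it leaves each individual matcher's ROC unchanged (the normalization symmetry above); it only changes how the scores combine under the sum rule.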
A nonparametric inferential statistical data analysis is presented. The utility of this method is demonstrated by analyzing results from minutiae template exchange with two-finger fusion. The analysis focuses on high-accuracy vendors and two modes of matching standard fingerprint templates: 1) Native Matching, where the same vendor generates both the templates and the matcher, and 2) Scenario 1 Interoperability, where vendor A's enrollment template is matched against vendor B's authentication template using vendor B's matcher. The purpose of this analysis is to make inferences about the underlying population from sample data, which provides insight at an aggregate level. This is very different from the data analysis presented in the MINEX04 report, in which vendors are individually ranked and compared. Using the nonparametric bootstrap bias-corrected and accelerated (BCa) method, 95% confidence intervals are computed for each mean error rate. Nonparametric significance tests are then applied to determine whether the difference between two underlying populations is real or due to chance, with a stated probability. Results from this method show, at a greater-than-95% confidence level, a significant degradation in the accuracy of Scenario 1 Interoperability with respect to Native Matching: on average, the False Non-Match Rate can increase two-fold. Additionally, it is shown why two-finger fusion using the sum rule is more accurate than single-finger matching under the same conditions. Results of a simulation are also presented to demonstrate the significance of confidence intervals derived from small samples, such as the six error rates in some of our cases.
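The resampling idea behind such confidence intervals can be sketched as follows. Note that the analysis described above uses the bias-corrected and accelerated (BCa) variant; the plain percentile bootstrap below is a simplified stand-in that only conveys the mechanism.

```python
import numpy as np

def bootstrap_ci(data, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean.

    Resamples the data with replacement n_boot times, computes the mean
    of each resample, and takes the alpha/2 and 1-alpha/2 quantiles of
    the resulting distribution as the interval endpoints.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    boot_means = np.array([
        rng.choice(data, size=data.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_means, [alpha / 2.0, 1.0 - alpha / 2.0])
    return float(lo), float(hi)
```

With samples as small as six error rates, the BCa correction matters: it adjusts the percentile endpoints for bias and skewness in the bootstrap distribution, which the plain percentile method ignores.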
Facial recognition is fast becoming one of the more popular and effective biometric modalities when used in controlled environments, i.e., those in which factors such as facial expression, pose, camera position, and, in particular, illumination are controlled to some degree to improve performance. Regulation or normalization of such factors affects all facial recognition algorithms, and illumination is a factor of particular importance. In this paper we describe a method that addresses illumination effects in face recognition by using Empirical Mode Decomposition (EMD) to identify the illumination modes that compose an image. After identifying the intrinsic mode functions that correspond to the dominant illumination factors, we reconstruct the facial image without these components to synthesize a more neutral facial image. We then perform recognition and verification experiments using several algorithms, including Principal Component Analysis (PCA), Fisher Linear Discriminant Analysis (FLDA), and Correlation Filters (CFs), to demonstrate the effectiveness of EMD as an illumination compensation method. Results are reported on the Carnegie Mellon University Pose-Illumination-Expression (CMU PIE) database and the Yale Face Database B.
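The core of EMD is the sifting operation, shown below for a 1-D signal (the paper applies EMD to 2-D images; this simplified sketch is ours). One sifting iteration interpolates envelopes through the local extrema and subtracts their mean; iterating this extracts an intrinsic mode function (IMF), and the slowly varying residue left behind is exactly where dominant illumination components tend to live.

```python
import numpy as np

def sift_once(signal):
    """One sifting iteration of 1-D Empirical Mode Decomposition.

    Upper and lower envelopes are linearly interpolated through the
    local maxima and minima; subtracting their mean is the basic step
    that, iterated to convergence, yields an intrinsic mode function.
    """
    x = np.arange(signal.size)
    interior = np.arange(1, signal.size - 1)
    is_max = (signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:])
    is_min = (signal[1:-1] < signal[:-2]) & (signal[1:-1] < signal[2:])
    maxima, minima = interior[is_max], interior[is_min]
    if maxima.size < 2 or minima.size < 2:
        return signal  # too few extrema: signal is a residue/trend
    upper = np.interp(x, maxima, signal[maxima])
    lower = np.interp(x, minima, signal[minima])
    return signal - (upper + lower) / 2.0
```

Production EMD implementations use spline (not linear) envelopes, handle boundary effects, and repeat sifting until a stopping criterion is met; for images, 2-D extensions of this procedure are required.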
The quality of a biometric system is directly related to the performance of its dissimilarity measure. Frequently, a generalized dissimilarity measure such as the Mahalanobis distance is applied to the task of matching biometric feature vectors. However, the accuracy of a biometric system can often be greatly improved by introducing a customized matching algorithm optimized for the particular biometric. In this paper we investigate two tailored similarity measures for behavioral biometric systems, based on expert knowledge of the data in the domain. We compare the performance of the proposed matching algorithms to that of other well-known similarity and distance functions, and demonstrate the superiority of one of the new algorithms in the chosen domain.
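To make the contrast concrete (these are generic measures, not the paper's proposed algorithms): the generalized Mahalanobis distance versus a simple tailored measure of the kind often used for behavioral data such as keystroke timings, where per-feature scaling is more robust than a full covariance model.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Generalised (Mahalanobis) distance of a probe x from a user model."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def scaled_manhattan(x, mean, std):
    """A simple tailored measure for behavioral features.

    Per-feature absolute deviation scaled by that feature's spread:
    cheaper than Mahalanobis (no covariance inversion) and less
    sensitive to a single outlier feature dominating the score.
    """
    return float(np.sum(np.abs(x - mean) / std))
```

The Mahalanobis distance needs a well-conditioned covariance estimate, which is hard to obtain from the few enrollment samples typical of behavioral biometrics; domain-tailored measures sidestep that estimation problem.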
Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications, including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, require no calibration, and generalize well across users. This paper presents a novel eye model that enables efficient uncalibrated eye gaze estimation. The proposed model is constructed from a geometric simplification of the eye and anthropometric data on eye feature sizes, circumventing the need for a calibration procedure for each individual user. Gaze angle calculation requires the positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere. The locations of the eye corners and midpupil are estimated by processing that follows eye detection, and the remaining parameters are obtained from anthropometric data. The model is easily extended to estimating eye gaze under variable head pose. It was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound on the model's performance was obtained by manually selecting the eye feature locations; the resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, indicating good generalization. This level of performance compares well with gaze estimation systems that use a calibration procedure to measure eye features.
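The geometry can be sketched as follows, under our own simplifying assumptions (frontal head pose, eyeball centre at the midpoint of the corners, all quantities in the same image units): the horizontal displacement of the midpupil from the eyeball centre, projected onto an eye sphere of anthropometric radius, gives the yaw via an arcsine. This is an illustration of the geometric idea, not the paper's exact formulation.

```python
import math

def gaze_yaw(corner_left, corner_right, midpupil, eye_radius):
    """Estimate horizontal gaze angle (degrees) from 2-D eye features.

    corner_left, corner_right, midpupil: (x, y) image coordinates.
    eye_radius: eyeball radius in the same units, taken from
    anthropometric data rather than per-user calibration.
    """
    centre_x = (corner_left[0] + corner_right[0]) / 2.0  # assumed eyeball centre
    dx = midpupil[0] - centre_x
    dx = max(-eye_radius, min(eye_radius, dx))  # clamp to the valid arcsin range
    return math.degrees(math.asin(dx / eye_radius))
```

Extending this to vertical gaze and to variable head pose means adding the analogous pitch computation and first rotating the eye features into a head-centred frame.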