In this paper we present a drift-correcting template update strategy for precisely tracking a feature point in 2D image sequences. The proposed strategy greatly extends the template tracking strategy of Matthews et al. [I. Matthews, T. Ishikawa and S. Baker, The template update problem, IEEE Trans. PAMI 26 (2004) 810-815] by incorporating a robust non-rigid image registration step used in medical imaging. Matthews et al.'s strategy uses the first template to correct drift in the current template; however, drift still builds up once the first template becomes quite different from the current one as tracking continues. In our strategy the first template is updated in a timely manner whenever it differs markedly from the current one, so that the updated first template can correct template drift in subsequent frames. A method based on the proposed strategy yields tracking results of sub-pixel accuracy, as measured against the commercial software REALVIZ(R) MatchMover(R) Pro 4.0. Our method runs fast on a desktop PC (3.0 GHz Pentium(R) IV CPU, 1 GB RAM, Windows(R) XP Professional, Microsoft Visual C++(R) 6.0), taking about 0.03 seconds on average to track the feature point in a frame (assuming a general affine transformation model and a 61×61-pixel template) and, when required, less than 0.1 seconds to update the first template. We also propose an architecture for implementing our strategy in parallel.
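The update rule described above can be sketched as follows. This is a minimal illustration under loudly stated assumptions, not the paper's implementation: the non-rigid registration step is replaced by a simple averaging stand-in, the dissimilarity test by normalized cross-correlation, and the 0.8 threshold is an arbitrary illustrative choice.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a.ravel() @ b.ravel() / denom) if denom else 0.0

class DriftCorrectingTracker:
    """Sketch of the update strategy: each tracked patch is corrected
    against a stored first template, and the first template itself is
    replaced once it has become too dissimilar to the current one."""

    def __init__(self, first_patch, update_threshold=0.8):
        self.first = first_patch.astype(float)   # drift-free reference
        self.current = self.first.copy()         # current template
        self.threshold = update_threshold        # hypothetical cut-off

    def step(self, tracked_patch):
        # Stand-in for the drift-correction step: re-anchor the freshly
        # tracked patch to the reference (the paper aligns it via robust
        # non-rigid registration; we simply average for illustration).
        self.current = 0.5 * (tracked_patch.astype(float) + self.first)
        # Update the first template when it no longer resembles the
        # current one, so later frames are corrected against it.
        if ncc(self.first, self.current) < self.threshold:
            self.first = self.current.copy()
        return self.current
```

The key point the sketch captures is that the reference template is not frozen: it is refreshed exactly when its similarity to the current template drops, which is what keeps drift correction effective over long sequences.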
In a previous paper (Ref. 9) we presented a feature-based nonrigid image registration method using a Hausdorff distance-based matching measure. One limitation of the method is that it is likely to fail in "ambiguous" cases, where some of the features in the source image lie closer to a significant number of non-corresponding features in the target image than to their true correspondents. To partly alleviate this limitation, in this paper we propose a new feature-based nonrigid image registration method that uses a multi-class-Hausdorff-fractions-based similarity matching measure. We first divide the features into a finite number of classes and then compute the similarity matching measure by summing the forward and backward multi-class Hausdorff fractions over all classes. The new similarity matching measure outperforms the one used in our previous work, provided that the features in the images to be registered can be correctly classified. We also adapt the optimization procedure of our previous method so that it works properly with the new similarity matching measure. The new method, which introduces only a small additional computational load, is capable of reducing undesired matching between features that are adjacent to each other but belong to different classes.
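The class-wise measure can be illustrated on labeled 2D point features. In this sketch the class labels, the distance tolerance `eps`, and the equal weighting of classes are illustrative assumptions; only the structure (sum of forward and backward Hausdorff fractions per class) follows the description above.

```python
import numpy as np

def hausdorff_fraction(src, dst, eps):
    """Fraction of points in src lying within eps of some point in dst
    (a Hausdorff *fraction*, not the max-min Hausdorff distance)."""
    if len(src) == 0 or len(dst) == 0:
        return 0.0
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return float((d.min(axis=1) <= eps).mean())

def multiclass_measure(src_by_class, dst_by_class, eps=1.0):
    """Sum the forward and backward Hausdorff fractions over classes, so
    a source feature only scores against target features of its own
    class -- adjacent features of different classes cannot match."""
    empty = np.empty((0, 2))
    total = 0.0
    for label in set(src_by_class) | set(dst_by_class):
        s = src_by_class.get(label, empty)
        t = dst_by_class.get(label, empty)
        total += hausdorff_fraction(s, t, eps)   # forward fraction
        total += hausdorff_fraction(t, s, eps)   # backward fraction
    return total
```

Because the fractions are computed per class, two well-aligned point sets with matching labels score the maximum (2 per class), while the same geometry with swapped labels scores zero, which is precisely how the measure suppresses matches between nearby features of different classes.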
A feature-based, nonrigid image registration method using a Hausdorff distance-based matching measure is presented. The proposed method is robust to outliers and missing features, as no correspondence is established between the features. Utilizing a B-spline-based, nonrigid deformation model, the proposed method is able to handle nonrigid deformations between the images to be registered. A gradient descent-based optimization procedure is developed to maximize the matching measure under the nonrigid transformation model. With adaptively adjustable step sizes, the optimization procedure works in a coarse-to-fine manner, so that large nonrigid deformations are compensated for first and the transformation parameters are then gradually refined. In addition, two acceleration techniques are devised to greatly speed up the registration method, making it more practical for real applications. The performance of the proposed method is validated in various experiments, from synthetic image registration to hand-drawn Chinese character registration and brain outline registration. The limitation of the method is also analyzed and exemplified. To partly alleviate this limitation, we incorporate landmark information into the method and achieve promising results.
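The coarse-to-fine optimization idea can be sketched in a drastically simplified setting: a pure translation stands in for the B-spline deformation model, a mean-of-nearest-distances cost stands in for the Hausdorff-type matching measure, and the shrinking step sizes echo the adaptively adjustable steps. Everything below is an illustrative assumption, not the paper's procedure.

```python
import numpy as np

def mean_min_dist(src, dst):
    """Average distance from each source feature to its nearest target
    feature -- a correspondence-free cost in the spirit of the
    Hausdorff-type matching measure."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def register_translation(src, dst, steps=(4.0, 2.0, 1.0, 0.5)):
    """Coarse-to-fine local search over a pure translation (a stand-in
    for gradient descent over B-spline parameters): large steps
    compensate gross misalignment first, then smaller steps gradually
    refine the estimate."""
    t = np.zeros(2)
    for step in steps:                 # adaptively shrinking step size
        improved = True
        while improved:
            improved = False
            for move in ((step, 0), (-step, 0), (0, step), (0, -step)):
                cand = t + move
                if mean_min_dist(src + cand, dst) < mean_min_dist(src + t, dst) - 1e-12:
                    t, improved = cand, True
    return t
```

Note that no point correspondences are ever formed: the cost only asks how close each feature is to *some* feature in the other set, which is what makes this family of measures robust to outliers and missing features.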
Corner points are among the most important feature points in computer vision and pattern recognition. In this paper we introduce a new boundary-based corner detection method that uses the wavelet transform for its ability to detect sharp variations. Our idea derives from Jiann-Shu Lee et al.'s algorithm but, unlike theirs, represents the orientation profile in an almost continuous way. Theoretical analysis and experimental results show that our method is effective in detecting corners.
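A boundary-based detector in this spirit can be sketched as follows. This is an assumption-laden illustration, not the paper's method: a derivative-of-Gaussian filter plays the role of the wavelet detail filter, and the `scale` and `threshold` values are arbitrary illustrative choices.

```python
import numpy as np

def detect_corners(boundary, scale=2.0, threshold=0.5):
    """Sketch of boundary-based corner detection: build the tangent
    orientation profile of a closed boundary, filter it with a
    derivative-of-Gaussian (standing in for the wavelet detail filter),
    and keep strong local maxima of the response magnitude."""
    diffs = np.roll(boundary, -1, axis=0) - boundary
    raw = np.arctan2(diffs[:, 1], diffs[:, 0])
    n = len(raw)
    # Tile and unwrap so the profile is smooth across the closing point.
    theta = np.unwrap(np.tile(raw, 3))
    k = int(6 * scale) | 1                      # odd kernel length
    x = np.arange(k) - k // 2
    dog = -x * np.exp(-x * x / (2.0 * scale * scale))
    dog /= np.abs(dog).sum()
    # Filter the middle copy; sharp orientation changes give strong peaks.
    resp = np.abs(np.convolve(theta, dog, mode="same"))[n:2 * n]
    return [i for i in range(n)
            if resp[i] > threshold
            and resp[i] >= resp[i - 1]
            and resp[i] >= resp[(i + 1) % n]]
```

The tiling step is one simple way to keep the orientation profile "almost continuous" around the closed boundary, so that a corner near the starting sample is treated no differently from any other corner.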