Object tracking with double-dictionary appearance model
Li Lv, Tanghuai Fan, Zhen Sun, Jun Wang, Lizhong Xu
Abstract
Dictionary learning has previously been applied to target tracking across images in video sequences. However, most trackers that use dictionary learning neglect to make optimal use of the representation coefficients to locate the target. This increases the possibility of losing the target in the presence of similar objects, or in case occlusion or rotation occurs. We propose an effective object-tracking method based on a double-dictionary appearance model under a particle filter framework. We employ a double dictionary by training template features to represent the target. This representation not only exploits the relationship between the candidate and target but also represents the target more accurately with minimal residual. We also introduce a simple and effective strategy to update the template to reduce the influence of occlusion, rotation, and drift. Experiments on challenging sequences showed that the proposed algorithm performs favorably against the state-of-the-art methods in terms of several comparative metrics.
© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE). 0091-3286/2016/$25.00
Li Lv, Tanghuai Fan, Zhen Sun, Jun Wang, and Lizhong Xu "Object tracking with double-dictionary appearance model," Optical Engineering 55(8), 083106 (18 August 2016). https://doi.org/10.1117/1.OE.55.8.083106
Published: 18 August 2016
CITATIONS
Cited by 9 scholarly publications.
KEYWORDS
Detection and tracking algorithms; Associative arrays; Particle filters; Optical tracking; Video; Large Synoptic Survey Telescope; Optical engineering

RELATED CONTENT
Online maintaining appearance model using particle filter, Proceedings of SPIE (November 29, 2007)
