Fusing video and text data by integrating appearance and behavior similarity
28 May 2013
Georgiy Levchuk, Charlotte Shabarekh
Abstract
In this paper, we describe an algorithm for multi-modal entity co-reference resolution and present experimental results using text and motion imagery data sources. Our model generates probabilistic associations between entities mentioned in text and entities detected in video by jointly optimizing measures of appearance and behavior similarity. Appearance similarity is calculated as a match between proposition-derived entity attributes mentioned in text and the object appearance classification from video sources. Behavior similarity is calculated from semantic information about entity movements, actions, and interactions with other entities, as mentioned in text and detected in video sources. Our model achieved a 79% F-score for text-to-video entity co-reference resolution; we show that entity interactions provide unique features for resolving the variability present in text data and the ambiguity of entities' visual appearance.
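The abstract does not spell out how the two similarity measures are combined. The sketch below is one plausible reading, assuming a simple linear weighting of appearance and behavior scores followed by a softmax normalization into association probabilities. All function names, the weight alpha, the Jaccard event overlap, and the toy data are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the paper's method) of fusing appearance and
# behavior similarity into probabilistic text-to-video associations.
import math

def appearance_sim(text_attrs: set[str], video_labels: dict[str, float]) -> float:
    """Average confidence of proposition-derived text attributes under the
    video appearance classifier's label scores (assumed representation)."""
    if not text_attrs:
        return 0.0
    return sum(video_labels.get(a, 0.0) for a in text_attrs) / len(text_attrs)

def behavior_sim(text_events: set[str], video_events: set[str]) -> float:
    """Jaccard overlap of semantic events (movements, actions, interactions)."""
    if not text_events and not video_events:
        return 0.0
    return len(text_events & video_events) / len(text_events | video_events)

def associate(text_entities, video_tracks, alpha=0.5):
    """Return P(track | text entity) via a softmax over joint scores.
    alpha trades off appearance vs. behavior similarity (an assumption)."""
    probs = {}
    for name, (attrs, events) in text_entities.items():
        scores = {tid: alpha * appearance_sim(attrs, t_labels)
                       + (1 - alpha) * behavior_sim(events, t_events)
                  for tid, (t_labels, t_events) in video_tracks.items()}
        z = sum(math.exp(s) for s in scores.values())
        probs[name] = {tid: math.exp(s) / z for tid, s in scores.items()}
    return probs

# Toy usage: "man in a red jacket who entered the building"
text_entities = {"man_1": ({"person", "red_jacket"}, {"enter_building"})}
video_tracks = {
    "trk_7": ({"person": 0.9, "red_jacket": 0.7}, {"enter_building", "walk"}),
    "trk_9": ({"person": 0.8, "blue_car": 0.9}, {"drive"}),
}
print(associate(text_entities, video_tracks))  # trk_7 receives the higher probability
```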
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Georgiy Levchuk and Charlotte Shabarekh "Fusing video and text data by integrating appearance and behavior similarity", Proc. SPIE 8751, Machine Intelligence and Bio-inspired Computation: Theory and Applications VII, 875107 (28 May 2013); https://doi.org/10.1117/12.2014878
KEYWORDS
Video
Data modeling
Semantic video
Video surveillance
Visualization
Feature extraction
Image resolution