Modeling loosely annotated images using both given and imagined annotations
Hong Tang, Nozha Boujemaa, Yunhao Chen, Lei Deng
Abstract
In this paper, we present an approach to learning latent semantic analysis models from loosely annotated images for automatic image annotation and indexing. The given annotation in training images is loose for two reasons: (1) ambiguous correspondences between visual features and annotated keywords, and (2) incomplete lists of annotated keywords. The second reason motivates us to enrich the incomplete annotation in a simple way before learning a topic model. In particular, "imagined" keywords are added to the incomplete annotation based on keyword-to-keyword similarity measured by co-occurrence. Both the given and the imagined annotations are then used to learn probabilistic topic models for automatically annotating new images. We conduct experiments on two image databases (Corel and ESP), together with their loose annotations, and compare the proposed method with state-of-the-art discrete annotation methods. The proposed method improves word-driven probabilistic latent semantic analysis (PLSA-words) to a performance comparable with the best discrete annotation method, while retaining a merit of PLSA-words, namely its wider semantic range.
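The annotation-enrichment step described in the abstract can be sketched concretely. The following is a minimal Python illustration, assuming (as the abstract suggests but does not specify) that keyword similarity is a Dice coefficient over image-level co-occurrence counts; the function names, the similarity measure, and the cutoff k are hypothetical and do not reproduce the authors' implementation.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_similarity(annotations):
    """Build a keyword-keyword similarity table from co-occurrence.

    `annotations` is a list of keyword lists, one per training image.
    Similarity here is a Dice coefficient over image-level
    co-occurrence counts; the paper's exact measure may differ.
    """
    word_count = Counter()   # images containing each keyword
    pair_count = Counter()   # images containing both keywords of a pair
    for words in annotations:
        unique = set(words)
        word_count.update(unique)
        for a, b in combinations(sorted(unique), 2):
            pair_count[(a, b)] += 1
    sim = {}
    for (a, b), n_ab in pair_count.items():
        s = 2.0 * n_ab / (word_count[a] + word_count[b])
        sim[(a, b)] = sim[(b, a)] = s
    return sim

def enrich(annotation, sim, k=2):
    """Pad an incomplete annotation with the k most similar 'imagined' keywords."""
    given = set(annotation)
    scores = Counter()
    for (a, b), s in sim.items():
        if a in given and b not in given:
            scores[b] += s
    imagined = [w for w, _ in scores.most_common(k)]
    return list(annotation) + imagined

# Toy usage: "sky" and "clouds" co-occur with "plane" in training,
# so they are imagined for an image annotated only with "plane".
train = [["plane", "sky", "clouds"], ["plane", "sky"], ["tiger", "grass"]]
sim = cooccurrence_similarity(train)
print(enrich(["plane"], sim))  # ['plane', 'sky', 'clouds']
```

The enriched annotations would then serve as the observed word counts when fitting the probabilistic topic model (PLSA-words) used to annotate new images.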
©(2011) Society of Photo-Optical Instrumentation Engineers (SPIE)
Hong Tang, Nozha Boujemaa, Yunhao Chen, and Lei Deng "Modeling loosely annotated images using both given and imagined annotations," Optical Engineering 50(12), 127004 (1 December 2011). https://doi.org/10.1117/1.3660575
Published: 1 December 2011
KEYWORDS
Visualization, Image processing, Visual process modeling, Image segmentation, Optical engineering, RGB color model, Data modeling
