Story-related caption detection and localization in news video
Chien-Cheng Lee, Cheng-Yuan Shih, Hao-Ming Huang
Abstract
We propose a method to detect and localize story-related subject captions in news video. Most caption detection and localization algorithms attempt to detect as many captions as possible; however, a news frame may contain many types of captions that are unrelated to the story. To facilitate fast and accurate access to news video content, a method for detecting and localizing the story-related caption is necessary. This paper addresses two problems in texture-based caption detection and localization: the time-consuming computation of features, and the clutter of caption detection results. We address these problems by first identifying the subject caption region based on the frequency of text occurrence. Then, we detect the frame in which the subject caption first appears onscreen. Finally, the texture-based caption localization procedure is performed on the subject caption region in those beginning frames only. This method decreases computation time significantly. Additionally, unrelated text is filtered out, so only the subject caption is detected and localized. Experimental results show that the proposed method can quickly and robustly detect subject captions from news video.
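The frequency-based idea in the abstract — keep only text-like regions that persist across frames, so transient overlays are filtered out before localization — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: a simple gradient-energy measure stands in for its texture features, and all function names and thresholds are hypothetical.

```python
import numpy as np

def texture_map(frame, thresh=0.3):
    """Binary map of high-texture (text-like) pixels. Gradient energy
    is a hypothetical stand-in for the paper's texture features."""
    gy, gx = np.gradient(frame.astype(float))
    energy = np.abs(gx) + np.abs(gy)
    peak = energy.max()
    if peak == 0:
        return np.zeros(frame.shape, dtype=bool)
    return energy > thresh * peak

def subject_caption_region(frames, freq_thresh=0.8):
    """Keep pixels whose text-like texture appears in at least
    freq_thresh of the frames, and return their bounding box
    (top, bottom, left, right), or None if nothing persists."""
    freq = np.mean([texture_map(f) for f in frames], axis=0)
    mask = freq >= freq_thresh
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return int(rows[0]), int(rows[-1]), int(cols[0]), int(cols[-1])

# Demo: a caption band present in every frame, plus a transient
# text overlay present in only 2 of 10 frames (filtered out).
rng = np.random.default_rng(0)
caption = rng.random((8, 60))
ticker = rng.random((6, 60))
frames = []
for i in range(10):
    f = np.zeros((60, 80))
    f[50:58, 10:70] = caption       # persistent subject caption
    if i < 2:
        f[2:8, 10:70] = ticker      # transient, low-frequency text
    frames.append(f)

box = subject_caption_region(frames)
print(box)
```

Because the transient overlay appears in only 20% of the frames, it falls below the frequency threshold, and the returned bounding box covers only the persistent caption band near the bottom of the frame; texture-based localization then needs to run only inside that region, which is the source of the claimed speedup.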
©(2009) Society of Photo-Optical Instrumentation Engineers (SPIE)
Chien-Cheng Lee, Cheng-Yuan Shih, and Hao-Ming Huang "Story-related caption detection and localization in news video," Optical Engineering 48(3), 037005 (1 March 2009). https://doi.org/10.1117/1.3103126
Published: 1 March 2009
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Video, Optical engineering, Wavelets, Detection and tracking algorithms, Image processing, Library classification systems, Feature extraction