The US increasingly relies on surveillance video to determine when activities of interest occur in a surveilled location. The growth in video volume places a difficult burden on the analyst workforce charged with evaluating streaming video or performing forensic analysis on archived video. This paper presents a video summarization pipeline that reduces the volume of video analysts must watch by condensing the video into shorter, presumably important clips. The pipeline uses object recognition and tracking to generate clips composed of object bounding boxes over time, segments these clips into unique trajectories, trains a stacked sparse autoencoder on the trajectories, and generates a summary from the autoencoder's reconstruction error, where high error indicates an object trajectory unlike those seen previously. The paper then compares the performance of the summarization pipeline on research datasets with its performance on more realistic DoD surveillance datasets.
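To illustrate the scoring step described above, the sketch below trains a single sparse autoencoder layer (sigmoid activations, KL-divergence sparsity penalty) on feature vectors representing "normal" trajectories, then flags novel trajectories by high reconstruction error. This is a minimal NumPy illustration of the general technique, not the paper's implementation: the network depth, feature representation, and all hyperparameters (`hidden`, `sparsity`, `beta`) are assumptions for the example.

```python
import numpy as np

def train_sparse_autoencoder(X, hidden, epochs=200, lr=0.5,
                             sparsity=0.05, beta=0.1, seed=0):
    """Train one sparse autoencoder layer on rows of X (values in [0, 1]).

    Sparsity is encouraged by a KL-divergence penalty pushing the mean
    hidden activation toward the target `sparsity` rate.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, d)); b2 = np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)          # hidden activations
        Y = sig(H @ W2 + b2)          # reconstruction
        rho = H.mean(axis=0).clip(1e-6, 1 - 1e-6)  # mean activation per unit
        # Backprop: squared-error gradient plus sparsity-penalty gradient.
        dY = (Y - X) * Y * (1.0 - Y)
        kl_grad = beta * (-(sparsity / rho) + (1 - sparsity) / (1 - rho)) / n
        dH = (dY @ W2.T + kl_grad) * H * (1.0 - H)
        W2 -= lr * H.T @ dY / n; b2 -= lr * dY.mean(axis=0)
        W1 -= lr * X.T @ dH / n; b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2, b2

def reconstruction_error(X, params):
    """Per-row mean squared reconstruction error; high values mark
    trajectories unlike the training data, i.e. summary candidates."""
    W1, b1, W2, b2 = params
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    Y = sig(sig(X @ W1 + b1) @ W2 + b2)
    return ((X - Y) ** 2).mean(axis=1)
```

In use, trajectories already seen reconstruct well and score low, while a trajectory far from the training distribution reconstructs poorly and would be selected for the summary.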