KEYWORDS: Data modeling, Image analysis, Visual process modeling, Education and training, Cameras, Performance modeling, Mixtures, Ablation, Visualization, Systems modeling
A topic model is a probabilistic method for data analysis and characterization that provides insight into the topics that make up each document in a corpus, where each topic is described by an associated word distribution. A dynamic topic model extends this approach to time series data. These models have typically been applied to the text domain, where the concepts of tokens and words are well defined. Applying them to the image domain is non-obvious because the analogous tokens and words must be hand-crafted. In this work, we apply the dynamic topic model to a sequence of images to provide insight into their dynamic nature, e.g., by helping to identify points in time that correspond to changes in operating conditions. We apply this model to images from the KITTI dataset and show that the model captures the evolving nature of these topics over time.
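The abstract does not detail how visual tokens are constructed. A common choice is to quantize local image descriptors into a "visual vocabulary" with k-means and then fit the dynamic topic model over the resulting bag-of-visual-words counts. The sketch below illustrates that pipeline under those assumptions, using gensim's LdaSeqModel implementation of the dynamic topic model; the raw-patch descriptor and all parameters are illustrative, not the authors' choices.

```python
# A minimal sketch (not the authors' implementation): build a visual
# vocabulary by k-means over local patch descriptors, convert each image
# to a bag of visual words, then fit gensim's dynamic topic model.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from gensim.models.ldaseqmodel import LdaSeqModel

def extract_descriptors(image, patch=8):
    """Toy descriptor: flattened raw patches from a grayscale image grid."""
    h, w = image.shape[:2]
    return np.array([image[i:i + patch, j:j + patch].ravel()
                     for i in range(0, h - patch, patch)
                     for j in range(0, w - patch, patch)], dtype=np.float32)

def fit_dtm(images, time_slice, vocab_size=256, num_topics=8):
    # 1. Visual vocabulary: cluster all descriptors into vocab_size "words".
    all_desc = np.vstack([extract_descriptors(im) for im in images])
    km = MiniBatchKMeans(n_clusters=vocab_size, n_init=3).fit(all_desc)
    # 2. Each image becomes a bag of visual words (sparse count vector).
    corpus = []
    for im in images:
        words = km.predict(extract_descriptors(im))
        ids, counts = np.unique(words, return_counts=True)
        corpus.append(list(zip(ids.tolist(), counts.tolist())))
    # 3. Dynamic topic model over the image sequence; time_slice gives the
    #    number of images in each time epoch and must sum to len(images).
    return LdaSeqModel(corpus=corpus, time_slice=time_slice,
                       id2word={i: str(i) for i in range(vocab_size)},
                       num_topics=num_topics)
```

Examining how each topic's word (cluster) distribution drifts across slices is then one way to surface the change points in operating conditions that the abstract mentions.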
SeeCoast is a prototype US Coast Guard port and coastal area surveillance system that aims to reduce operator workload while maintaining optimal domain awareness by shifting operators' focus from having to detect events to analyzing and acting upon the knowledge derived from automatically detected anomalous activities. The automated scene understanding capability provided by the baseline SeeCoast system (as currently installed at the Joint Harbor Operations Center at Hampton Roads, VA) results from the integration of several components. Machine vision technology processes the real-time video streams provided by USCG cameras to generate vessel track and classification (based on vessel length) information. A multi-INT fusion component generates a single, coherent track picture by combining information from the video processor with that from surface surveillance radars and AIS reports. Based on this track picture, SeeCoast analyzes vessel activity to detect user-defined unsafe, illegal, and threatening vessel activities using a rule-based pattern recognizer, and to detect anomalous vessel activities on the basis of automatically learned behavior normalcy models. Operators can optionally guide the learning system by providing examples and counter-examples of activities of interest, and can refine its performance by confirming alerts or flagging false alarms. The fused track picture also provides a basis for automated control and tasking of cameras to detect vessels in motion. Real-time visualization combining the products of all SeeCoast components in a common operating picture is provided by a thin web-based client.
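The abstract does not specify SeeCoast's rule language, so the following is purely illustrative: a sketch of the general shape of a rule-based pattern recognizer over fused vessel tracks, where every field name and threshold is a hypothetical stand-in.

```python
# Illustrative only: SeeCoast's actual rule language is not published here.
# This sketch shows the general shape of a rule-based pattern recognizer
# over fused vessel tracks; field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    vessel_id: str
    speed_kts: float        # speed over ground from the fused track
    dist_to_ferry_m: float  # range to nearest high-value asset
    in_no_wake_zone: bool

RULES = [
    ("SPEEDING_IN_NO_WAKE_ZONE",
     lambda t: t.in_no_wake_zone and t.speed_kts > 5.0),
    ("CLOSE_APPROACH_TO_FERRY",
     lambda t: t.dist_to_ferry_m < 100.0 and t.speed_kts > 10.0),
]

def evaluate(track: Track):
    """Return the names of all user-defined rules this track violates."""
    return [name for name, pred in RULES if pred(track)]

# Example: a fast vessel inside a no-wake zone trips the first rule.
print(evaluate(Track("V123", speed_kts=12.0,
                     dist_to_ferry_m=800.0, in_no_wake_zone=True)))
```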
Michael Seibert, Bradley Rhodes, Neil Bomberger, Patricia Beane, Jason Sroka, Wendy Kogel, William Kreamer, Chris Stauffer, Linda Kirschner, Edmond Chalom, Michael Bosse, Robert Tillson
SeeCoast extends the US Coast Guard Port Security and Monitoring system by adding capabilities to detect, classify, and track vessels using electro-optic and infrared cameras, and also uses learned normalcy models of vessel activities to generate alert cues for the watch-standers when anomalous behaviors occur. SeeCoast fuses the video data with radar detections and Automatic Identification System (AIS) transponder data to generate composite fused tracks for vessels approaching the port, as well as for vessels already in the port. SeeCoast then applies rule-based and learning-based pattern recognition algorithms to alert the watch-standers to unsafe, illegal, threatening, and other anomalous vessel activities. The prototype SeeCoast system has been deployed to Coast Guard sites in Virginia. This paper provides an overview of the system and outlines the lessons learned to date in applying data fusion and automated pattern recognition technology to the port security domain.
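The abstract does not detail the fusion algorithm that merges radar, AIS, and video reports into composite tracks. As a minimal sketch of one common baseline (nearest-neighbor gating with a running-average position update), not SeeCoast's actual method:

```python
# A minimal sketch of nearest-neighbor gating for report-to-track
# association; SeeCoast's fusion algorithm is not detailed in the abstract.
import math

def associate(reports, tracks, gate_m=50.0):
    """Assign each sensor report (radar, AIS, or video detection) to the
    nearest existing track within the gate, or start a new track.
    reports/tracks: dicts with 'x' and 'y' positions in meters."""
    for rpt in reports:
        best, best_d = None, gate_m
        for trk in tracks:
            d = math.hypot(rpt["x"] - trk["x"], rpt["y"] - trk["y"])
            if d < best_d:
                best, best_d = trk, d
        if best is None:
            tracks.append(dict(rpt))  # no track in the gate: start a new one
        else:
            # Crude composite update: average the track and report positions.
            best["x"] = 0.5 * (best["x"] + rpt["x"])
            best["y"] = 0.5 * (best["y"] + rpt["y"])
    return tracks
```

A production system would replace the running average with a proper filter (e.g., a Kalman filter per track) and handle sensor-specific error models, but the gating-and-association structure is the same.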
We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light Visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused Visible/MWIR/LWIR imagery.
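The fusion approach is built on neural models of color vision (center-surround opponent processing); the details are not reproduced in the abstract. As a loosely inspired, purely illustrative sketch, two bands can be rendered as an opponent-color composite as follows, where the band-to-channel mapping, blur scale, and normalization are all assumptions rather than the authors' design:

```python
# A rough sketch, not the authors' system: opponent-color fusion of a
# low-light visible band and a thermal (IR) band using center-surround
# contrast, loosely in the spirit of neural color-vision models.
import numpy as np
from scipy.ndimage import gaussian_filter

def opponent(center, surround, sigma=3.0, eps=1e-6):
    """Steady-state shunting-style contrast of center vs. blurred surround."""
    s = gaussian_filter(surround, sigma)
    return (center - s) / (center + s + eps)

def fuse_vis_ir(vis, ir):
    """Map within-band and between-band opponent signals to RGB for display."""
    vis = vis.astype(np.float32)
    ir = ir.astype(np.float32)
    r = opponent(ir, vis)   # IR-vs-visible opponent channel -> red
    g = opponent(vis, vis)  # within-band visible contrast    -> green
    b = opponent(vis, ir)   # visible-vs-IR opponent channel  -> blue
    rgb = np.stack([r, g, b], axis=-1)
    rgb -= rgb.min()
    rgb /= rgb.max() + 1e-6  # normalize to [0, 1] for display
    return rgb
```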
This paper presents a novel approach to higher-level (level 2+) information fusion and knowledge representation using semantic networks composed of coupled spiking neuron nodes. Networks of spiking neurons have been shown to exhibit synchronization, in which sub-assemblies of nodes become phase locked to one another. This phase locking reflects the tendency of biological neural systems to produce synchronized neural assemblies, which have been hypothesized to be involved in feature binding. The approach in this paper embeds spiking neurons in a semantic network, in which a synchronized sub-assembly of nodes represents a hypothesis about a situation. Likewise, multiple synchronized assemblies that are out-of-phase with one another represent multiple hypotheses. The initial network is hand-coded, but additional semantic relationships can be established by associative learning mechanisms. This approach is demonstrated with a simulated scenario involving the tracking of suspected criminal vehicles between meeting places in an urban environment.
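The paper's nodes are spiking neurons; a much simpler stand-in that exhibits the same qualitative behavior is a Kuramoto network of phase oscillators. The sketch below is that stand-in, not the paper's model: semantically coupled nodes phase-lock into one assembly while uncoupled nodes drift out of phase, mirroring competing hypotheses.

```python
# Illustrative stand-in for the paper's spiking-neuron semantic network:
# Kuramoto phase oscillators on a small graph. Coupled nodes phase-lock
# (one synchronized assembly = one hypothesis); uncoupled nodes drift.
import numpy as np

def simulate(adj, steps=2000, dt=0.01, k=2.0, seed=0):
    """adj: symmetric 0/1 coupling matrix over semantic-network nodes."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)  # initial phases
    omega = rng.normal(1.0, 0.05, n)      # natural frequencies
    for _ in range(steps):
        # Kuramoto update: each node is pulled toward coupled neighbors,
        # d(theta_i)/dt = omega_i + k * sum_j adj[i, j] * sin(theta_j - theta_i)
        diff = theta[None, :] - theta[:, None]
        theta += dt * (omega + k * (adj * np.sin(diff)).sum(axis=1))
    return theta % (2 * np.pi)

# Two linked "hypothesis" nodes plus one unlinked node: the linked pair
# ends up phase-locked; the third node is free-running.
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]], dtype=float)
print(simulate(adj))
```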