KEYWORDS: Visualization, Sensors, Information visualization, Cameras, Video surveillance, Information fusion, Imaging systems, Image classification, Data fusion
This paper proposes anomaly detection from visual information using distributed deep learning. First, visual anomalies are defined for a specific application domain in which they are critical to safe operation. Second, a deep convolutional neural network is adopted as the detector for visual anomalies. Third, detection results from different visual sources are fused to achieve higher accuracy and a lower false-alarm rate. Experimental results demonstrate that the proposed visual anomaly detection framework achieves high performance and provides satisfactory security assurance.
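The fusion step can be illustrated with a minimal sketch. It assumes each camera's CNN outputs an anomaly probability in [0, 1] and that fusion is a reliability-weighted average followed by thresholding; the paper's exact fusion rule is not specified here, so the weights and threshold are placeholders.

```python
def fuse_anomaly_scores(scores, weights, threshold=0.5):
    """Weighted-average fusion of per-camera anomaly probabilities.

    scores  : anomaly probabilities from each camera's CNN (assumption)
    weights : per-camera reliability weights (illustrative values)
    Returns the fused score and the final anomaly decision.
    """
    if len(scores) != len(weights):
        raise ValueError("one weight per camera score is required")
    total = sum(weights)
    fused = sum(s * w for s, w in zip(scores, weights)) / total
    return fused, fused >= threshold

# Three cameras: two weakly anomalous views, one confident detection
# from a camera we trust twice as much.
fused_score, is_anomaly = fuse_anomaly_scores([0.2, 0.3, 0.9], [1.0, 1.0, 2.0])
```

Weighting lets a well-placed camera dominate the decision while still allowing weak agreement across cameras to trigger an alarm.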
As an excellent method for extracting distinctive invariant features from images, SIFT (scale-invariant feature transform) effectively resists affine transformations of images such as translation and rotation, and in theory is also robust to illumination changes [1]. In practice, however, SIFT performance suffers from the contrast reduction caused by illumination changes. In this paper, the performance of SIFT under different contrast levels is systematically analyzed and evaluated, a reasonable explanation is given for why SIFT performance varies under different illumination conditions, and a fast SIFT matching method based on contrast compression is proposed.
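One simple form of contrast compression, useful for probing how SIFT matching degrades as contrast drops, is to linearly pull intensities toward the image mean. This is an illustrative sketch only; the compression function actually used in the paper may differ.

```python
import numpy as np

def compress_contrast(img, factor):
    """Linearly compress intensities toward the image mean.

    factor = 1.0 leaves the image unchanged; smaller factors in (0, 1]
    reduce contrast, simulating the effect of illumination change on the
    input to a SIFT detector. (Illustrative model, not the paper's exact one.)
    """
    img = img.astype(np.float64)
    mean = img.mean()
    out = mean + factor * (img - mean)
    return np.clip(out, 0, 255).astype(np.uint8)

# A toy 2x2 "image": contrast halves around the mean (127.5 here).
patch = np.array([[0, 255], [0, 255]], dtype=np.uint8)
low = compress_contrast(patch, 0.5)
```

Running a SIFT detector on `compress_contrast(img, f)` for a range of `f` values gives a controlled way to measure keypoint and match counts against contrast.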
To address vehicle target detection in multi-polarization SAR images over terrain backgrounds, global CFAR detection on dual-polarized 16-bit data is proposed, which effectively reduces the influence of terrain clutter on detection. In addition, by analyzing terrain flatness and screening out built-up areas, the target detection area is reduced, interference from complex terrain and densely built-up areas is further suppressed, and the reliability of target detection is greatly improved.
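A global CFAR detector on dual-polarized data can be sketched as follows. The threshold (global mean plus a multiple of the standard deviation per channel) and the requirement that a pixel exceed the threshold in both polarizations are assumptions for illustration, not the paper's exact statistic.

```python
import numpy as np

def global_cfar(hh, hv, k=3.0):
    """Global CFAR detection on dual-polarized 16-bit SAR amplitude images.

    Each channel is thresholded at mean + k*std of its own global clutter
    statistics; detections are pixels exceeding the threshold in BOTH
    polarizations (combination rule assumed for this sketch).
    """
    det = np.ones(hh.shape, dtype=bool)
    for ch in (hh, hv):
        ch = ch.astype(np.float64)
        thr = ch.mean() + k * ch.std()
        det &= ch > thr
    return det

# Toy scene: flat clutter plus one bright target pixel in both channels.
rng = np.random.default_rng(0)
hh = rng.integers(90, 110, size=(64, 64)).astype(np.uint16)
hv = rng.integers(90, 110, size=(64, 64)).astype(np.uint16)
hh[32, 32] = 60000
hv[32, 32] = 60000
mask = global_cfar(hh, hv)
```

Requiring agreement between polarizations is one way dual-polarized data can suppress single-channel clutter spikes.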
In this paper, heterogeneous feature extraction by deep learning is applied to the classification of drug-related webpages. First, body text and image-label text are extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, a text-based BOW model generates the text representation and an image-based BOW model generates the image representation; the webpage representation is formed by concatenating the two. Heterogeneous feature extraction is then performed both by deep learning and by classical methods such as PCA, and feature selection is also carried out using information theory. Finally, the extracted and selected features are classified. Experimental results demonstrate that the classification accuracy with features extracted by deep learning is higher than with features extracted or selected by the classical methods, and also higher than the accuracy of single-modal classification.
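The feature-level fusion and the classical PCA baseline can be sketched with toy data: the text and image BOW histograms for each webpage are concatenated, then PCA (computed via SVD) reduces the joint representation. Dimensions and data below are illustrative, not the paper's.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                 # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T         # scores in the reduced space

rng = np.random.default_rng(1)
text_bow = rng.random((10, 50))    # 10 webpages, 50-word text vocabulary (toy)
image_bow = rng.random((10, 30))   # 10 webpages, 30 visual words (toy)

fused = np.hstack([text_bow, image_bow])   # 10 x 80 joint representation
reduced = pca_reduce(fused, 5)             # classical baseline: PCA to 5 dims
```

A deep autoencoder or CNN feature extractor would replace `pca_reduce` in the deep-learning variant the paper compares against.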
In this paper, multi-modal local decision fusion is used for drug-related webpage classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, represented by PHOG features, and one SVM classifier is trained for cannabis, represented by the mid-level features of a BOW model. For each instance in a webpage, the seven SVMs assign seven labels to its image, and seven further labels are obtained by searching for the names of the drug-taking instruments and cannabis in its related text. Concatenating the seven image labels with the seven text labels yields the representation of each instance. Finally, Multi-Instance Learning is used to classify the drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
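The instance representation and a simple bag-level decision can be sketched as follows. Each instance gets 7 image labels and 7 text labels concatenated into a 14-dimensional binary vector; the bag rule below (a webpage is positive if any instance fires in enough dimensions) is the standard MIL assumption, not necessarily the exact classifier the paper trains.

```python
def instance_vector(image_labels, text_labels):
    """Concatenate 7 image SVM labels with 7 text-search labels."""
    assert len(image_labels) == 7 and len(text_labels) == 7
    return image_labels + text_labels

def classify_webpage(instances, min_positive_dims=2):
    """A bag (webpage) is positive if any instance fires in enough dimensions.

    min_positive_dims is an illustrative threshold: here, an instance counts
    as drug-related if at least two of its 14 labels are positive (e.g. the
    same instrument detected in both image and text).
    """
    return any(sum(v) >= min_positive_dims for v in instances)

page = [
    instance_vector([0] * 7, [0] * 7),          # innocuous instance
    instance_vector([0, 1, 0, 0, 0, 0, 0],      # pipe detected in the image
                    [0, 1, 0, 0, 0, 0, 0]),     # and named in the related text
]
drug_related = classify_webpage(page)
```

Requiring agreement across modalities within an instance is one plain way local decision fusion can reduce false alarms from a single noisy modality.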
For detecting ground vehicles in low-resolution SAR images, a method is proposed that first determines the region containing the vehicles and then detects targets within that specific region. Experimental results show that this method not only reduces the target detection area but also reduces the influence of terrain clutter on detection, greatly improving the reliability of target detection.
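The region-first strategy can be sketched minimally: a coarse region of interest is fixed before any per-pixel detection, so thresholding only runs inside it and clutter outside never contributes. The ROI selection and threshold below are placeholders for the paper's actual method.

```python
import numpy as np

def detect_in_region(img, roi, k=3.0):
    """Threshold only inside the ROI (r0, r1, c0, c1); outside stays False."""
    r0, r1, c0, c1 = roi
    patch = img[r0:r1, c0:c1].astype(np.float64)
    thr = patch.mean() + k * patch.std()     # statistics from the ROI only
    det = np.zeros(img.shape, dtype=bool)
    det[r0:r1, c0:c1] = patch > thr
    return det

rng = np.random.default_rng(4)
scene = rng.integers(90, 110, size=(100, 100)).astype(np.uint16)
scene[10, 10] = 50000      # bright target inside the ROI
scene[80, 80] = 50000      # equally bright clutter outside the ROI
hits = detect_in_region(scene, roi=(0, 40, 0, 40))
```

Because the bright return at (80, 80) lies outside the region of interest, it is ignored entirely rather than having to be rejected by the detector.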
A new visual navigation method is proposed in this paper that takes advantage of natural landmarks and local image features. Images of natural landmarks are first collected and organized into a database, and the SURF features of each landmark are extracted and saved. At run time, real-time images are captured by the camera and their SURF features are extracted as well. For each real-time image, its SURF features are matched against those of the landmark images, and the matching rules decide which landmark the real-time image belongs to. Experimental results demonstrate that this method has high accuracy and strong robustness in complex environments.
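The landmark-matching step can be sketched with a nearest-neighbor ratio test over descriptor sets, with the landmark collecting the most accepted matches winning the vote. SURF extraction itself (e.g. via an OpenCV build with SURF enabled) is omitted; toy arrays stand in for real 64-dimensional SURF descriptors, and the ratio-test rule is a common matching convention, not necessarily the paper's exact rule.

```python
import numpy as np

def count_matches(query, stored, ratio=0.7):
    """Count query descriptors whose best match passes the ratio test."""
    good = 0
    for q in query:
        d = np.linalg.norm(stored - q, axis=1)   # distances to all stored
        i = np.argsort(d)[:2]                    # two nearest neighbors
        if len(d) > 1 and d[i[0]] < ratio * d[i[1]]:
            good += 1
    return good

def best_landmark(query, database):
    """Return the landmark name with the most ratio-test matches."""
    return max(database, key=lambda name: count_matches(query, database[name]))

rng = np.random.default_rng(2)
gate = rng.random((20, 64))      # stored descriptors for landmark "gate" (toy)
tower = rng.random((20, 64))     # stored descriptors for landmark "tower" (toy)
# Real-time image: noisy copies of some "gate" descriptors.
query = gate[:8] + 0.01 * rng.random((8, 64))
landmark = best_landmark(query, {"gate": gate, "tower": tower})
```

The ratio test discards ambiguous matches, which is what gives this kind of voting its robustness when scenes share similar-looking features.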
In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model generates the text representation and an image-based BOW model generates the image representation. Finally, the text and image representations are fused by several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and the feature level, and much higher than the accuracy of single-modal classification.
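The kernel-combination idea behind MKL can be shown with a toy sketch: a text kernel and an image kernel (here, plain linear kernels over toy BOW features) are mixed with convex weights. Real MKL learns these weights jointly with the SVM; the fixed weights below are only illustrative.

```python
import numpy as np

def linear_kernel(X):
    """Gram matrix of a linear kernel over row-wise feature vectors."""
    return X @ X.T

def combine_kernels(kernels, weights):
    """Convex combination of base kernel matrices (weights sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(3)
text_bow = rng.random((6, 40))    # 6 webpages, text BOW features (toy)
image_bow = rng.random((6, 25))   # 6 webpages, image BOW features (toy)

K_text = linear_kernel(text_bow)
K_image = linear_kernel(image_bow)
K = combine_kernels([K_text, K_image], [0.6, 0.4])   # fed to a kernel SVM
```

A convex combination of valid kernels is itself a valid (positive semidefinite) kernel, which is what lets the fused matrix be handed to a standard kernel SVM.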