We present a polarized dual Single Pixel Camera (SPC) operating in the Short Wave Infrared (SWIR) spectral range that reconstructs polarized images from an ensemble of compressed measurements using a total variation based method. Walsh-Hadamard matrices are used to generate pseudo-random measurements, which speeds up the reconstruction and enables reconstruction of high-resolution images. The system combines a Digital Micromirror Device (DMD), two nearly identical InGaAs photodiodes, and two polarization filters. Roughly half of the DMD mirrors are oriented toward the first photodiode, and the complementary mirrors are oriented toward the second photodiode. Total variation based reconstruction strategies have been implemented and evaluated on both simulated compressed measurements and real outdoor scenes imaged with the developed system.
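The complementary dual-detector measurement scheme can be illustrated with a small simulation. This is a minimal sketch under simplifying assumptions (noiseless measurements, full sampling, a flattened 1-D scene); all variable names are illustrative and do not come from the actual system:

```python
import numpy as np
from scipy.linalg import hadamard

def dual_spc_measure(x, H):
    """Simulate the dual single-pixel camera: mirrors with pattern
    value +1 reflect toward detector 1, the complementary mirrors
    (-1) toward detector 2."""
    M1 = (H + 1) / 2          # 0/1 mask seen by photodiode 1
    M2 = (1 - H) / 2          # complementary mask, photodiode 2
    y1 = M1 @ x               # intensity at detector 1
    y2 = M2 @ x               # intensity at detector 2
    return y1, y2

N = 64                        # number of mirrors (flattened scene)
H = hadamard(N)               # +/-1 Walsh-Hadamard patterns
x = np.random.rand(N)         # unknown scene intensities
y1, y2 = dual_spc_measure(x, H)

# The detector difference y1 - y2 equals H @ x, so with full
# sampling the scene is recovered by the (self-inverse, up to a
# factor N) Hadamard transform.
x_rec = H @ (y1 - y2) / N
```

With fewer measurements than mirrors, this direct inversion is no longer possible, which is where the total variation based reconstruction comes in.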
Raman spectroscopy is an efficient method for detecting explosives even in small quantities. A laser can be combined with a Coded Aperture Snapshot Spectral Imaging (CASSI) system to collect Raman spectra from a surface at stand-off distances. The CASSI system decreases the data collection time but increases the reconstruction time for the Raman image. Reconstructing Raman spectra from an ensemble of compressed sensing measurements using standard methods such as Total Variation (TV) is rather time consuming and limits the application domain of the technique. Novel machine learning approaches such as Deep Learning (DL) have lately been applied to reconstruction problems. We evaluate our earlier developed DL approach for reconstruction of Raman spectra from an ensemble of measurements, formulated as a regression problem. The DL network is trained by minimizing a loss function composed of two components: a reconstruction error and a re-projection error. The method is trained on simulated data generated using a transfer function developed to mimic the optical properties of a CASSI system. The DL network has been trained on training sets with different levels of background noise, different numbers of materials in the scene, and different spatial configurations of the materials. The reconstruction results have been qualitatively evaluated on simulated data and compared to the Two-Step Iterative Shrinkage/Thresholding (TwIST) algorithm in terms of reconstruction quality and computation time. The reconstruction time for the DL network is orders of magnitude lower than for TwIST without reducing the quality of the reconstructed Raman spectra.
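The two-component loss described above can be sketched as follows. The linear operator `A` here stands in for the CASSI transfer function, and the function name and weighting factor `lam` are illustrative assumptions, not details from the paper:

```python
import numpy as np

def training_loss(x_hat, x_true, y, A, lam=0.1):
    """Loss = reconstruction error + weighted re-projection error.

    x_hat  : network output (reconstructed Raman spectrum)
    x_true : ground-truth spectrum from the simulated training set
    y      : compressed measurement
    A      : linear forward model approximating the optical
             transfer function of the CASSI system
    """
    reconstruction_error = np.mean((x_hat - x_true) ** 2)
    # Re-projecting x_hat through the forward model should
    # reproduce the measurement that was actually recorded.
    reprojection_error = np.mean((A @ x_hat - y) ** 2)
    return reconstruction_error + lam * reprojection_error

# Toy check: a perfect reconstruction gives zero loss.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 32))
x = rng.standard_normal(32)
y = A @ x
loss_perfect = training_loss(x, x, y, A)
```

The re-projection term ties the network output to the physics of the measurement, which helps when the ground truth alone under-constrains the regression.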
The current development of increasingly sensitive low-light detector technologies in the VNIR/SWIR regions shows much promise for future night vision applications, including digital image fusion. By combining spectral bands from the reflective and the thermally emissive domains, providing complementary band-specific cues and advantages, it is anticipated that a fused representation will increase situational awareness and target discrimination performance. However, performance assessment of image fusion remains an open problem, as suitable procedures, models, and image quality metrics are still largely missing. A night-time data collection was made on a side-aspect two-hand object identification task over several ranges in a rural/woodland area using a common line-of-sight VNIR/LWIR system. Perception experiments based on an 8-alternative forced choice (8AFC) object ID task were performed, on both the two individual bands and several common pixel-based fusion algorithms (including maximum, subtraction, and averaging). As image fusion is highly task and scene dependent, it is difficult to draw any general conclusions from a single experiment, but for the particular task/scene combination investigated, most of the fusion algorithms are shown to perform better than the VNIR channel, albeit most of them fail to perform as well as the LWIR. This is thought to be the result of the VNIR channel being contrast-limited for the particular task/scene being studied and the low dynamic range of the low-light EBCMOS camera used in the fusion setup.
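The pixel-based fusion rules mentioned above (maximum, subtraction, averaging) operate per pixel on the co-registered band pair. A minimal sketch, assuming both inputs are already co-registered and normalized to [0, 1] (the function name and the shift in the subtraction rule are illustrative choices, not from the study):

```python
import numpy as np

def fuse(vnir, lwir, rule="average"):
    """Combine two co-registered, normalized band images pixel-wise."""
    if rule == "max":
        # Keep the stronger response of the two bands at each pixel.
        return np.maximum(vnir, lwir)
    if rule == "average":
        return 0.5 * (vnir + lwir)
    if rule == "subtract":
        # Signed band difference, shifted back into [0, 1].
        return np.clip(0.5 * (vnir - lwir) + 0.5, 0.0, 1.0)
    raise ValueError(f"unknown rule: {rule}")

vnir = np.random.rand(4, 4)
lwir = np.random.rand(4, 4)
fused = fuse(vnir, lwir, rule="max")
```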
The use of Improvised Explosive Devices (IEDs) has increased significantly around the world and is now a globally widespread phenomenon. Although measures can be taken to anticipate and prevent the opponent's ability to deploy IEDs, detection of IEDs will always be a central activity. A wide range of sensors is useful for this task, but even simple means, such as a pair of binoculars, can be crucial for detecting IEDs in time.
Disturbed earth (disturbed soil), such as freshly dug areas, dumps of clay on top of smooth sand or depressions in the ground, could be an indication of a buried IED. This paper briefly describes how a field trial was set up to provide a realistic data set on a road section containing areas with disturbed soil due to buried IEDs. The road section was imaged using a forward looking land-based sensor platform consisting of visual imaging sensors together with long-, mid-, and shortwave infrared imaging sensors.
The paper investigates the presence of discriminatory information in surface texture by comparing areas of disturbed against undisturbed soil. The investigation is conducted for each of the available wavelength bands. To extract features that describe texture, image processing tools such as 'Histogram of Oriented Gradients', 'Local Binary Patterns', 'Lacunarity', 'Gabor Filtering' and 'Co-Occurrence' are used. It is found that texture as characterized here may provide discriminatory information for detecting disturbed soil, but the signatures we found are weak and cannot be used alone in, e.g., a detector system.
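As an illustration of one of the texture descriptors listed above, here is a minimal Local Binary Patterns computation. This is the basic 8-neighbour variant on a 3x3 window, not necessarily the exact configuration used in the study:

```python
import numpy as np

def lbp(image):
    """Basic 8-neighbour Local Binary Pattern for a 2-D grayscale
    image; returns one 8-bit LBP code per interior pixel."""
    c = image[1:-1, 1:-1]                 # centre pixels
    # Clockwise neighbour offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy: image.shape[0] - 1 + dy,
                          1 + dx: image.shape[1] - 1 + dx]
        # Set this bit where the neighbour is >= the centre pixel.
        codes |= (neighbour >= c).astype(np.uint8) << bit
    return codes

img = np.random.rand(16, 16)
codes = lbp(img)
# The texture feature vector is then the normalized histogram of codes.
hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
```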
We propose a novel deep learning approach using autoencoders to map spectral bands to a space of lower dimensionality while preserving the information that makes it possible to discriminate different materials. Deep learning is a relatively new pattern recognition approach which has given promising results in many applications. In deep learning, a hierarchical feature representation with increasing levels of abstraction is learned. The autoencoder is an important unsupervised technique frequently used in deep learning for extracting important properties of the data. The learned latent representation is a non-linear mapping of the original data which potentially preserves the discrimination capacity.
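A minimal sketch of the idea, using a single-layer linear autoencoder trained by gradient descent on synthetic spectra. Real autoencoders are deeper and non-linear; all sizes, the synthetic data model, and the learning rate here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_latent, n_samples = 20, 3, 200

# Synthetic spectra: mixtures of three "material" endmembers, so the
# data intrinsically lives in a 3-dimensional subspace of the bands.
endmembers = rng.random((3, n_bands))
abundances = rng.random((n_samples, 3))
X = abundances @ endmembers

# Encoder W maps spectra to the latent space, decoder V maps back.
W = rng.standard_normal((n_bands, n_latent)) * 0.1
V = rng.standard_normal((n_latent, n_bands)) * 0.1

lr = 0.01
initial_err = np.mean((X @ W @ V - X) ** 2)
for _ in range(500):
    Z = X @ W                      # latent representation
    X_hat = Z @ V                  # reconstruction
    E = X_hat - X                  # reconstruction error
    # Gradients of the mean squared reconstruction error.
    grad_V = Z.T @ E / n_samples
    grad_W = X.T @ (E @ V.T) / n_samples
    V -= lr * grad_V
    W -= lr * grad_W
final_err = np.mean((X @ W @ V - X) ** 2)
```

The latent matrix `Z` is the compressed representation of the spectra; training drives the decoder output back toward the input, so `Z` must retain the information needed to reconstruct, and hence discriminate, the materials.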
We present algorithm evaluations for ATR of small sea vessels. The targets are at kilometer distances from the sensors, which
means that the algorithms have to deal with images affected by turbulence and mirage phenomena. We evaluate
previously developed algorithms for registration of 3D-generating laser radar data. The evaluations indicate that our
probabilistic registration method provides some robustness to turbulence- and mirage-induced uncertainties.
We also assess methods for target classification and target recognition on these new 3D data.
An algorithm for detecting moving vessels in infrared image sequences is presented; it is based on optical flow
estimation. Detecting a moving target with an unknown spectral signature in a maritime environment is a challenging
problem due to camera motion, background clutter, turbulence and the presence of mirage. First, the optical flow caused
by the camera motion is eliminated by estimating the global flow in the image. Second, connected regions containing
significant motion that differs from the camera motion are extracted. It is assumed that motion caused by a moving vessel
is more temporally stable than motion caused by mirage or turbulence. Furthermore, it is assumed that the motion caused
by the vessel is more homogeneous, with respect to both magnitude and orientation, than motion caused by mirage and
turbulence. Sufficiently large connected regions with a flow of acceptable magnitude and orientation are considered
target regions. The method is evaluated on newly collected sequences of SWIR and MWIR images with varying targets,
target ranges and background clutter.
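The two detection steps above (global-flow compensation, then extraction of sufficiently large connected residual-motion regions) can be sketched as follows. Using the median flow as the global-motion estimate and fixed thresholds is an illustrative simplification, not the actual algorithm:

```python
import numpy as np
from scipy import ndimage

def detect_moving_regions(flow_u, flow_v, mag_thresh=1.0, min_area=20):
    """flow_u, flow_v: per-pixel optical flow components.
    Returns a label image of candidate target regions."""
    # Step 1: estimate and remove the global flow induced by
    # camera motion (here: the median flow over the frame).
    res_u = flow_u - np.median(flow_u)
    res_v = flow_v - np.median(flow_v)
    # Step 2: keep pixels whose residual motion is significant...
    mag = np.hypot(res_u, res_v)
    mask = mag > mag_thresh
    # ...and group them into connected regions, discarding small
    # blobs (mirage/turbulence flicker tends to be small and unstable).
    labels, n = ndimage.label(mask)
    for i in range(1, n + 1):
        if np.sum(labels == i) < min_area:
            labels[labels == i] = 0
    return labels

# Toy frame: uniform camera-induced flow plus a faster 6x6 "vessel".
u = np.full((40, 40), 2.0)
v = np.zeros((40, 40))
u[10:16, 10:16] += 5.0
regions = detect_moving_regions(u, v)
```

In practice the temporal-stability assumption would also be used, by requiring the extracted regions to persist across consecutive frames.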
Finally, we discuss a concept for combining passive and active imaging in an ATR process. The main steps are passive
imaging for target detection, active imaging for target/background segmentation, and fusion of passive and active
imaging for target recognition.
This paper briefly describes a field trial designed to give a realistic data set on a road section containing areas with
disturbed soil due to buried IEDs. During a time-span of a couple of weeks, the road was repeatedly imaged using a
multi-band sensor system with spectral coverage from visual to LWIR. The field trial was conducted to support a long
term research initiative aiming at using EO sensors and sensor fusion to detect areas of disturbed soil.
Samples from the collected data set are presented in the paper, together with an investigation of basic statistical
properties of the data. We conclude that upon visual inspection it is fully possible to discover areas that have been
disturbed, using visual and/or IR sensors. Reviewing the statistical analysis, we also conclude that
samples taken from both disturbed and undisturbed soil have well-definable statistical distributions for all spectral bands.
We explore statistical tests to discriminate between different samples, with positive indications that discrimination
between disturbed and undisturbed soil may be possible using statistical methods.
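One generic example of such a statistical test is the two-sample Kolmogorov-Smirnov test, which compares the empirical distributions of pixel samples from the two soil classes. This is a sketch on synthetic data, not the actual trial data; the distributions and their parameters are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic pixel intensities in one spectral band, assuming the
# disturbed-soil distribution is slightly shifted.
undisturbed = rng.normal(loc=100.0, scale=5.0, size=500)
disturbed = rng.normal(loc=104.0, scale=5.0, size=500)

# Two-sample KS test: a small p-value indicates the samples are
# unlikely to come from the same distribution.
statistic, p_value = stats.ks_2samp(undisturbed, disturbed)

# A same-class comparison for contrast.
undisturbed_2 = rng.normal(loc=100.0, scale=5.0, size=500)
_, p_same = stats.ks_2samp(undisturbed, undisturbed_2)
```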