KEYWORDS: Sensors, Video, Global Positioning System, Video processing, Cameras, Telecommunications, Data acquisition, Receivers, Binary data, Data communications
Implanted mines and improvised devices are a persistent threat to Warfighters. Current Army countermine missions for route clearance need on-the-move standoff detection to improve the rate of advance. Vehicle-based forward-looking sensors such as electro-optical and infrared (EO/IR) devices can be used to identify potential threats in near real-time (NRT) at safe standoff distances to support route clearance missions. The MOVERS (Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System) is a vehicle-based multi-sensor integration and exploitation system that ingests and processes video and imagery data captured from forward-looking EO/IR and thermal sensors, and generates target/feature alerts, using the Video Processing and Exploitation Framework (VPEF) “plug and play” video processing toolset. The MOVERS Framework provides an extensible, flexible, and scalable GOTS computing and multi-sensor integration framework that makes it possible to add more vehicles, sensors, processors, or displays, along with a service architecture that provides low-latency raw video and metadata streams as well as a command and control interface. Functionality in the framework is exposed through the MOVERS SDK, which decouples the implementation of the service and client from the specific communication protocols.
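As an illustration of this decoupling pattern, the sketch below shows what a protocol-agnostic stream and command interface might look like in C++. The type and member names (FrameRecord, VideoStreamService, subscribe, sendCommand) are hypothetical and are not the actual MOVERS SDK API; they only indicate how client code could be insulated from the underlying transport.

```cpp
// Hypothetical illustration of the service/client decoupling pattern described
// above; these names are NOT the actual MOVERS SDK API.
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Protocol-agnostic frame record a client would consume.
struct FrameRecord {
    uint64_t timestamp_us;          // capture time, microseconds
    std::vector<uint8_t> pixels;    // raw video payload
    std::vector<uint8_t> metadata;  // KLV-style sensor metadata blob
};

// Abstract service interface: clients program against this type,
// independent of the transport that delivers the data.
class VideoStreamService {
public:
    virtual ~VideoStreamService() = default;
    // Low-latency raw video and metadata stream.
    virtual void subscribe(std::function<void(const FrameRecord&)> on_frame) = 0;
    // Command and control interface.
    virtual void sendCommand(const std::string& command) = 0;
};

// A concrete transport implementation would derive from VideoStreamService;
// swapping transports then requires no change to client code.
```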
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection through efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, most forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. To overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
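A minimal sketch of what a standardized detected-object record of the kind mentioned above might look like is given below. The field names are assumptions for illustration only, not the actual VPEF interface definition.

```cpp
// Hypothetical illustration of a standardized detection-output record;
// field names are assumptions, not the actual VPEF interface.
#include <cstdint>
#include <string>
#include <vector>

struct DetectedObject {
    uint64_t frame_id;      // frame in which the object was detected
    double   latitude;      // geolocated position, if sensor metadata permits
    double   longitude;
    int      x, y, w, h;    // pixel-space bounding box
    double   confidence;    // detector score in [0, 1]
    std::string algorithm;  // which plug-in produced the detection
};

// A plug-in conforming to such an interface could be swapped into the
// processing chain without changes to downstream consumers.
using DetectionList = std::vector<DetectedObject>;
```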
In this paper, we present a vehicular buried threat detection approach developed over the past several years, and its latest implementation and integration in the VPEF environment. Buried threats exhibit varying signatures under different operating environments. To reliably detect true targets while minimizing the number of false alarms, a suite of false alarm mitigators (FAMs) has been developed to process the potential targets identified by the baseline module. A vehicle track can be formed over a number of frames, and targets are further analyzed both spatially and temporally. The algorithms have been implemented in C/C++ as GStreamer plugins and are suitable for vehicle-mounted, on-the-move real-time exploitation.
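To make the plugin-based processing chain concrete, the sketch below shows how such a GStreamer pipeline might be launched from C/C++. The element name buriedthreatdetect is a hypothetical placeholder for a VPEF detection plugin, and the source element and file name are illustrative; the actual MOVERS pipelines are not shown here.

```cpp
// Minimal sketch of launching a GStreamer processing chain; the element
// "buriedthreatdetect" is a hypothetical stand-in for a VPEF detection plugin.
#include <gst/gst.h>

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);

    // Illustrative pipeline: decode a recorded video, run a detection plugin,
    // and display the result; detections would be published as metadata.
    GError* error = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "filesrc location=sample_flight.ts ! decodebin ! videoconvert ! "
        "buriedthreatdetect ! autovideosink",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_clear_error(&error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Run until an error or end-of-stream message arrives on the bus.
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        static_cast<GstMessageType>(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```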
Rain affects the thermal properties of soil and the temperatures of soils and buried targets within the penetration depth of the water. This work involves using predictions from the Countermine Computational Test Bed (CTB), a 3-D finite element model that accounts for coupled heat and moisture transfer in soil and targets. The meteorological data set used in this work consists of a single day of data repeated over 3 days; the repetition is required to isolate the effects of rain. The CTB is used to predict and compare surface and subsurface soil and target temperatures with and without rain. The meteorological data set contains 24 hrs without rain, followed by 14 hrs of rain at a precipitation rate of 5 mm/hr, then 10 hours plus 1 subsequent day without rain and with no cloud cover.
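The construction of such a forcing series is straightforward; the sketch below assembles three days of hourly forcing by repeating one day of meteorology and inserting the 14 hr, 5 mm/hr rain window described above. The structure and variable names are illustrative assumptions and do not reflect the CTB's actual input format.

```cpp
// Minimal sketch of the repeated forcing series: one day of meteorology
// repeated for three days, with a 14 hr rain window at 5 mm/hr inserted
// after the first (dry) day. Names are illustrative only.
#include <array>
#include <vector>

struct HourlyForcing {
    double air_temp_c;        // air temperature, deg C
    double solar_w_m2;        // solar irradiance, W/m^2
    double precip_mm_per_hr;  // precipitation rate, mm/hr
};

std::vector<HourlyForcing> buildForcing(const std::array<HourlyForcing, 24>& day1) {
    const int kDays = 3;
    const int kRainStartHr = 24;   // rain begins after the first dry day
    const int kRainHours = 14;     // 14 hrs of rain
    const double kRainRate = 5.0;  // mm/hr

    std::vector<HourlyForcing> series;
    for (int d = 0; d < kDays; ++d) {
        for (int h = 0; h < 24; ++h) {
            HourlyForcing f = day1[h];  // repeat day-1 meteorology
            const int t = d * 24 + h;
            const bool raining =
                (t >= kRainStartHr && t < kRainStartHr + kRainHours);
            f.precip_mm_per_hr = raining ? kRainRate : 0.0;
            series.push_back(f);
        }
    }
    return series;  // 72 hourly records: dry day, rain window, dry recovery
}
```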
KEYWORDS: Clouds, Meteorology, Soil science, Solar radiation, Infrared radiation, Detection and tracking algorithms, 3D acquisition, 3D modeling, Finite element methods, Solar radiation models
Cloud cover affects direct and diffuse solar radiation and IR downwelling, and the values for these three components are calculated using measured meteorological data while varying the values of cloud cover and cloud type using algorithms from the literature. The effects of these three transient forcing function components on surface, subsurface, and target interior temperatures are studied in this work. The cloud cover effects are isolated from the varying multi-day diurnal cycles by repeating the meteorological data of a single day. Cloud cover is a subgrid variable and hence is often reported as 0% or 100%. This study includes a comparison of the effects of these two cloud cover values on a single geographical location for 6 days, with each day repeating the meteorological conditions of day 1. This work involves using predictions from the Countermine Computational Test Bed (CTB), a 3D finite element model that accounts for coupled heat and moisture transfer in soil and targets.
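For orientation, the sketch below shows one simple way a cloud-cover fraction could modify the three forcing components discussed above. The functional forms and coefficients are generic illustrations only; they are not the specific literature algorithms used in the CTB study, which also account for cloud type.

```cpp
// Generic illustration of a cloud-cover correction to the three forcing
// components (direct solar, diffuse solar, IR downwelling). Coefficients
// and functional forms are assumptions, not the CTB's algorithms.
#include <cmath>

struct RadiationComponents {
    double direct_w_m2;
    double diffuse_w_m2;
    double ir_down_w_m2;
};

// cloudFraction runs from 0.0 (clear) to 1.0 (overcast), matching the
// 0% / 100% subgrid reporting discussed above.
RadiationComponents applyCloudCover(const RadiationComponents& clearSky,
                                    double cloudFraction) {
    RadiationComponents out = clearSky;

    // Direct beam is attenuated as cloud fraction grows.
    out.direct_w_m2 = clearSky.direct_w_m2 * (1.0 - cloudFraction);

    // Part of the removed beam reappears as diffuse radiation
    // (illustrative 50% forward-scattering assumption).
    out.diffuse_w_m2 = clearSky.diffuse_w_m2 +
                       0.5 * clearSky.direct_w_m2 * cloudFraction;

    // Clouds enhance downwelling IR; an empirical form (1 + c * N^d) is
    // common, with c and d depending on cloud type (illustrative values).
    const double c = 0.22, d = 2.75;
    out.ir_down_w_m2 = clearSky.ir_down_w_m2 *
                       (1.0 + c * std::pow(cloudFraction, d));
    return out;
}
```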
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is assessing the highest image data compression rate that can be afforded without loss of image quality for warfighters in the loop or degradation of near real-time mine detection algorithm performance. The JPEG-2000 compression standard is used to perform data compression, and both lossless and lossy compressions are considered. A multi-spectral anomaly detector such as RX (Reed and Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection on different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance between compressed and uncompressed imagery for various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
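For reference, the sketch below computes the global RX anomaly score, the Mahalanobis distance of each pixel's spectral vector from the scene mean, which is the core of the baseline detector named above. It uses the Eigen linear algebra library; an operational detector would more likely estimate background statistics locally or over a sliding window.

```cpp
// Minimal sketch of the global RX (Reed-Xiaoli) anomaly score:
// score(x) = (x - mu)' * C^-1 * (x - mu) against scene-wide statistics.
#include <Eigen/Dense>
#include <vector>

// pixels: one column per pixel, one row per spectral band.
std::vector<double> rxScores(const Eigen::MatrixXd& pixels) {
    const Eigen::Index nPix = pixels.cols();

    // Background mean and covariance estimated from the whole scene.
    Eigen::VectorXd mean = pixels.rowwise().mean();
    Eigen::MatrixXd centered = pixels.colwise() - mean;
    Eigen::MatrixXd cov =
        centered * centered.transpose() / static_cast<double>(nPix - 1);

    Eigen::LDLT<Eigen::MatrixXd> solver(cov);  // factor once, solve per pixel

    std::vector<double> scores(static_cast<size_t>(nPix));
    for (Eigen::Index i = 0; i < nPix; ++i) {
        Eigen::VectorXd d = centered.col(i);
        scores[static_cast<size_t>(i)] = d.dot(solver.solve(d));
    }
    return scores;  // large values flag spectral anomalies (candidate targets)
}
```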
The fundamental challenges of buried mine detection arise from the fact that the mean spectral signatures of disturbed soil areas that indicate mine presence are nearly always very similar to the signatures of mixed background pixels that naturally occur in heterogeneous scenes composed of various types of soil and vegetation. In our previous work, we demonstrated that MWIR images can be used to effectively detect buried mines. In this work, we further improve our existing method by fusing multiple buried mine classifiers. Each target chip extracted from the MWIR image is first clustered and then scanned in three directions (vertical, horizontal, and diagonal) to construct three feature vectors. Since each cluster center represents all pixels in its cluster, each feature vector essentially captures the most significant thermal variations of the same target chip in one of the three directions. To detect buried mines using our variable-length feature vectors, we apply the Kolmogorov-Smirnov (KS) test to discriminate buried mines from background clutter. Since we design one KS-based classifier for each directional scan, each target chip is evaluated by three classifiers associated with the vertical, horizontal, and diagonal scans. In our system, these three classifiers are applied to the same target chip, producing three independent detection results, which are then fused for the refined detection. Test results using actual MWIR images have shown that our system can effectively detect buried mines in MWIR images with a low false alarm rate.
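The KS test referenced above compares two empirical distributions via the maximum distance between their cumulative distribution functions. The sketch below computes the two-sample KS statistic for a chip's scan profile against a reference signature profile; threshold selection and the fusion of the three directional classifiers are not shown.

```cpp
// Minimal sketch of the two-sample Kolmogorov-Smirnov statistic:
// the maximum distance between the two empirical CDFs.
#include <algorithm>
#include <cmath>
#include <vector>

double ksStatistic(std::vector<double> a, std::vector<double> b) {
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());

    const double na = static_cast<double>(a.size());
    const double nb = static_cast<double>(b.size());
    size_t i = 0, j = 0;
    double d = 0.0;
    while (i < a.size() && j < b.size()) {
        if (a[i] < b[j]) {
            ++i;
        } else if (b[j] < a[i]) {
            ++j;
        } else {  // tied values: step both empirical CDFs past the tie
            const double v = a[i];
            while (i < a.size() && a[i] == v) ++i;
            while (j < b.size() && b[j] == v) ++j;
        }
        d = std::max(d, std::fabs(i / na - j / nb));
    }
    return d;  // small D suggests the two profiles share a distribution
}
```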
Traditional landmine detection techniques are both dangerous and time consuming. Landmines can be square, round, cylindrical, or bar shaped; the casing can be metal, plastic, or wood. These characteristics make landmine detection challenging. We have developed new methods that improve the performance of both surface and buried mine detection. Our system starts with image segmentation based on a wavelet thresholding algorithm: we estimate the thresholding value in the wavelet domain and obtain the corresponding thresholding value in the image domain via the inverse discrete wavelet transform. The thresholded image retains the pixels associated with mines together with background clutter. To determine which pixels represent the mines, we apply an adaptive self-organizing map algorithm to cluster the thresholded image. Our surface mine classifiers are based on Fourier descriptors and moment invariants, which capture the geometric features of surface mines in the MWIR images. Our buried mine classifier utilizes the cluster intensity variations: we first cluster the target chip using a 3D unsupervised clustering algorithm, then perform horizontal scanning to build a cluster intensity variation profile, which is statistically compared with the signature profiles via the Kolmogorov-Smirnov hypothesis test.
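As an illustration of the shape features mentioned above, the sketch below computes a simple Fourier descriptor of a mine boundary: the DFT of the boundary treated as a complex sequence, normalized for translation, scale, rotation, and starting point. The moment-invariant and clustering stages are not shown, and the normalization choices here are one common convention rather than the exact formulation used in our classifiers.

```cpp
// Minimal sketch of a Fourier descriptor for a closed boundary given as
// complex points x + iy; normalization makes the descriptor invariant to
// translation, scale, rotation, and starting point.
#include <cmath>
#include <complex>
#include <vector>

std::vector<double> fourierDescriptor(
    const std::vector<std::complex<double>>& boundary, size_t numCoeffs) {
    const size_t n = boundary.size();
    const double kPi = 3.14159265358979323846;
    std::vector<std::complex<double>> F(n, {0.0, 0.0});

    // Plain O(n^2) DFT of the boundary sequence (an FFT would be used in practice).
    for (size_t k = 0; k < n; ++k)
        for (size_t t = 0; t < n; ++t)
            F[k] += boundary[t] *
                    std::polar(1.0, -2.0 * kPi * double(k) * double(t) / double(n));

    // Drop F[0] (translation), divide by |F[1]| (scale), keep magnitudes
    // (rotation and starting point). Assumes |F[1]| is nonzero.
    std::vector<double> desc;
    const double scale = std::abs(F[1]);
    for (size_t k = 2; k < 2 + numCoeffs && k < n; ++k)
        desc.push_back(std::abs(F[k]) / scale);
    return desc;
}
```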
A significant amount of airborne data has been collected in the past, and more is expected to be collected in the future, to support airborne landmine detection research and evaluation under various programs. In order to evaluate mine and minefield detection performance for sensors and detection algorithms, it is essential to generate reliable and accurate ground truth for the locations of the mine targets and fiducials present in raw imagery. The current ground-truthing operation is primarily manual, which makes ground truthing a time consuming and expensive exercise in the overall data collection effort. In this paper, a semi-automatic ground-truthing technique is presented which reduces the role of the operator to a few high-level input and validation actions. A correspondence is established between the high-contrast targets in the airborne imagery (the image features) and the known GPS locations of the targets on the ground (the map features) by imposing various position and geometric constraints. These image and map features may include individual fiducial targets, rows of fiducial targets, and triplets of non-collinear fiducials. The targets in the imagery are detected using the RX anomaly detector. An affine or linear conformal transformation from map features to image features is calculated based on the feature correspondence, and this map-to-image transformation is used to generate ground truth for mine targets. Since accurate and reliable flight-log data is currently not available, one-time specification of a few parameters such as flight speed, flight direction, camera resolution, and the location of the initial frame on the map is required from the operator. These parameters are updated and corrected for subsequent frames based on the processing of previous frames. Image registration is used to ground-truth images which do not have enough high-contrast fiducials for reliable correspondence. A MATLAB GUI called SemiAutoGT, developed for the ground-truthing process, is briefly discussed. Results are presented for ground truthing of the data collected under the Lightweight Airborne Multispectral Minefield Detection (LAMD) program.
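Once feature correspondences are available, the map-to-image transformation can be estimated by linear least squares. The sketch below fits a 2x3 affine transform with the Eigen library; feature matching, the linear conformal (similarity) variant, and outlier handling are omitted.

```cpp
// Minimal sketch of fitting a map-to-image affine transform from matched
// point pairs via linear least squares (Eigen).
#include <Eigen/Dense>
#include <vector>

struct Point2D { double x, y; };

// Returns the 2x3 affine matrix A such that image ~= A * [map_x, map_y, 1]^T.
Eigen::Matrix<double, 2, 3> fitAffine(const std::vector<Point2D>& map,
                                      const std::vector<Point2D>& image) {
    const int n = static_cast<int>(map.size());  // needs n >= 3 correspondences
    Eigen::MatrixXd M(n, 3);
    Eigen::MatrixXd Y(n, 2);
    for (int i = 0; i < n; ++i) {
        M.row(i) << map[i].x, map[i].y, 1.0;  // map feature, homogeneous form
        Y.row(i) << image[i].x, image[i].y;   // corresponding image feature
    }

    // Least-squares solve M * X = Y for the 3x2 parameter block, then transpose.
    Eigen::MatrixXd X = M.colPivHouseholderQr().solve(Y);
    return X.transpose();
}
```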
Over the past several years, an enormous amount of airborne imagery in various formats has been collected, and collection will continue into the future, to support airborne mine/minefield detection processes, improve algorithm development, and aid in imaging sensor development. Ground truthing of imagery is an essential part of the algorithm development process, helping to validate the detection performance of the sensor and to improve algorithm techniques. A GUI (Graphical User Interface) called SemiTruth was developed in MATLAB, incorporating the signal processing, image processing, and statistics toolboxes, to aid in ground-truthing imagery. The semi-automated ground-truthing GUI is made possible by the current data collection method, which includes UTM/GPS (Universal Transverse Mercator/Global Positioning System) coordinate measurements of the mine target and fiducial locations on the given minefield layout to support identification of the targets in the raw imagery. This semi-automated ground-truthing effort was developed by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, Airborne Application Branch, with support from the University of Missouri-Rolla.