In this paper we discuss an approach to solving sensor and information fusion problems using the maximum likelihood adaptive neural system (MLANS). This neural network combines a model-based approach with optimal statistical techniques to achieve adaptivity in data fusion. Its weights are fuzzy measures associating each piece of information with the various decision classes, which permits the fusion of data from various sources and on various levels: measurements, features, decisions such as subjective probabilities obtained from external sources, or other fuzzy measures of association. The weights are parameterized in terms of a relatively small number of model parameters, which are estimated by the minimum entropy and maximum likelihood neurons. The maximum likelihood neurons permit extremely fast learning of data distributions, so that MLANS achieves the information-theoretic bounds on speed of adaptation and learning. A related advantage of the model-based approach is the MLANS capability to combine self-learning with any available information, including a priori and a posteriori information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users, please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
A new neural network architecture for binary hypothesis testing is discussed. The network can utilize results from sensors making independent or dependent decisions (as well as any combination of binary data). Furthermore, it employs a novel structure, incorporating a set of trainable threshold values but no trainable weight values. The threshold values are trained using a minimum probability of error criterion, and only one threshold is modified for each training sample. Simulation results are presented comparing the performance of the network with that of the optimal parametric detector for the case of independent sensor decisions. These results show that for independent data, the performance of the net approaches that of the optimal parametric detector.
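The optimal parametric detector for independent local decisions, which the network's performance approaches, is the classical log-likelihood-ratio fusion rule; a minimal sketch (the sensor operating points and prior threshold below are illustrative assumptions):

```python
import math

def fuse_binary_decisions(decisions, pd, pf, prior_ratio=1.0):
    """Optimally fuse independent binary sensor decisions.

    decisions[i] in {0, 1}; pd[i], pf[i] are sensor i's detection and
    false-alarm probabilities. Declares a target (returns 1) when the
    accumulated log-likelihood ratio exceeds the prior threshold."""
    llr = 0.0
    for u, d, f in zip(decisions, pd, pf):
        if u == 1:
            llr += math.log(d / f)        # sensor said "target"
        else:
            llr += math.log((1 - d) / (1 - f))  # sensor said "no target"
    return 1 if llr > math.log(prior_ratio) else 0
```

Two confident "target" votes outweigh one weak dissent; three "no target" votes do not.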
Taking advantage of optical and microwave remote sensing images with their complementary information, image fusion is a valuable approach for providing and improving up-to-date space maps in frequently cloud-covered areas such as the humid Tropics. Remotely sensed data from the north of the Netherlands is processed for test and calibration purposes; it is planned to later apply the methodology in Indonesia, with the selected research area situated on Sumatra. This paper gives an overview of the existing techniques and presents first results of fusing ERS-1 SAR data with SPOT and LANDSAT TM imagery, investigating different combinations of fusion techniques and input imagery in terms of orbit, looking angle, and spectral/spatial resolution. The goal is an optimized fusion approach that takes into account the parameters influencing the accuracy and information-extraction possibilities of fused data for mapping and map updating in tropical developing countries.
The relevance of data association is analyzed for the crucial problem of detection. Statistical decision theory is applied in order to establish the optimal detection test for a multisensor system. The optimal high-threshold, optimal low-threshold, and optimal linear tests are also derived. These four tests are illustrated in the simplest case of a two-sensor system with a known and constant signal and a model of the sensor noises. The decision regions are displayed and the performances are compared.
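For the known-constant-signal, independent-Gaussian-noise case, the likelihood-ratio test reduces to a linear statistic and the decision regions are half-planes; a minimal sketch (the signal level, noise variances, and equal-prior threshold are illustrative assumptions):

```python
def optimal_two_sensor_test(x1, x2, s=1.0, var1=1.0, var2=1.0, threshold=None):
    """Likelihood-ratio detection test for a known constant signal s
    observed by two sensors in independent Gaussian noise.

    log LR = s*x1/var1 + s*x2/var2 - s**2/(2*var1) - s**2/(2*var2),
    so the test compares a weighted linear statistic with a threshold."""
    stat = s * x1 / var1 + s * x2 / var2
    if threshold is None:
        # equal priors: threshold at the midpoint of the two hypotheses
        threshold = s * s / (2 * var1) + s * s / (2 * var2)
    return 1 if stat > threshold else 0
```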
The evolutionary design, signal conditioning, performance prediction, and validation with experimental data of robust centralized fusion algorithms (CFAs) that operate in clutter with unspecified distribution are presented. Two CFAs, called non-coherent integration and T-squared, followed by adaptive constant false alarm rate (CFAR) post-processing, were evaluated along with several variants. The fusion algorithms were designed to provide various degrees of robustness and inherent CFAR properties in Weibull and lognormal clutter. Each algorithm's fusion performance, defined via receiver operating characteristics (ROCs), was compared both across algorithms and with the ROCs of the individual sensors (by Monte Carlo simulation and by the use of measured data with a target in clutter). The test results with both target-in-the-clear and target-in-clutter data are in concert with the theoretically predicted behavior.
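The adaptive CFAR post-processing stage can be illustrated by a basic cell-averaging CFAR detector; the guard/training window sizes and scale factor below are illustrative assumptions, not the paper's design:

```python
def ca_cfar(power, guard=1, train=4, scale=3.0):
    """Cell-averaging CFAR over a 1-D power sequence.

    Each cell is compared with a threshold proportional to the mean of
    the surrounding training cells, with guard cells excluded so the
    target's own energy does not inflate the noise estimate."""
    detections = []
    n = len(power)
    for i in range(n):
        cells = []
        for j in range(i - guard - train, i + guard + train + 1):
            if 0 <= j < n and abs(j - i) > guard:
                cells.append(power[j])
        if cells and power[i] > scale * sum(cells) / len(cells):
            detections.append(i)
    return detections
```

A lone strong return in flat clutter is detected; cells adjacent to it are not, since the spike sits inside their guard band or raises their noise estimate.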
This paper addresses performance issues related to the experimental validation of data fusion systems. It considers the question of performance improvement as a result of multisensor data fusion; more specifically, whether it is feasible to design a fusion system so as to guarantee performance improvement beyond what is achievable by processing data from any single sensor alone. Two theorems that answer this question, with conditions under which such an improvement is feasible for centralized and distributed fusion, are provided. Based on these two theorems, an optimal fusion architecture is discussed for predetection fusion. Shortcomings in applying the optimal fusion rules in the presence of partial statistical knowledge, and means to overcome them, are discussed. The need for data validation and adaptive sensitivity control in the fusion design when the optimality conditions are not satisfied is demonstrated, and suggestions for designing the feedback loop are given.
Previous work in data fusion has seen the development of a range of architectures for multisensor data fusion systems, from fully centralized through distributed to fully decentralized. This paper presents some results obtained from an implementation of a multitarget tracking system built around a fully decentralized Kalman filter (DKF). Explicit use is made of the information available locally to a sensor to control its pointing and target detection. The tracking system integrates an essentially range-only sensor with a bearing-only sensor, and the performance of the system is described in terms of both its ability to produce good tracks and its requirement for communications bandwidth. The sensors run asynchronously from each other and also exhibit asynchronous first detection. Of particular importance is the way the individual sensors can use the information in the global picture to make decisions about which targets to observe. In the demonstration system, simple sensor management is achieved by fixating on the nearest (interesting) target. First we give some background and describe the decentralized data fusion test bed. Then we consider the realization of the decentralized information filter in terms of the ultrasonic and IR sensors used in our demonstration system. Finally, we draw some conclusions about system performance and indicate some possible future work.
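The core property decentralized information-filter fusion exploits is that information states and information matrices simply add across nodes; a minimal scalar sketch (this ignores the common-information bookkeeping a real DKF node must perform to avoid double counting):

```python
def fuse_information(local_estimates):
    """Fuse independent local estimates in information form.

    In information (inverse-covariance) form, information matrices
    (here scalars, Y = 1/P) and information states (y = Y*x) add
    across nodes, which is what makes the filter decentralizable.

    local_estimates: list of (x, P) pairs (estimate, variance)."""
    Y = sum(1.0 / P for _, P in local_estimates)
    y = sum(x / P for x, P in local_estimates)
    return y / Y, 1.0 / Y  # fused estimate and fused variance
```

Two equally confident estimates of 1.0 and 3.0 fuse to 2.0 with half the variance of either.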
The primary thrusts of the intelligent multisource, multisensor integration (IMMSI) effort are to formalize an approach to hypothesis-driven distributed sensor management, validate that approach, identify candidates for decision support, and investigate implementations of appropriate cognitive processing modules. Using the existing manual voice communication-based cooperative process as a model, a coherent suite of human-machine interfaces, data communication protocols, and decision aids is being developed with the goal of real-time globally optimal sensor allocation within the mission context. The Knowledgeable Observer And Linked Advice System (KOALAS) architecture provides a framework for constructing the operator-inductive/machine-deductive IMMSI system. The machine continuously updates a model of the environment from both local and remote sensor data. The operator interacts with the system by evaluating the perceived model and tuning it through the introduction of hypotheses. These hypotheses, also shared among platforms, provide cues for sensor management. The evolving sensor allocation provides new data for the model, and a closed-loop intelligent control system is created. The cooperative agent paradigm provides a cognitive model for the IMMSI distributed sensor management process. In a typical cooperative task the common goal is achieved by the agents performing discrete transactions on a shared system state vector. Within the tactical environment, however, centralization of data is neither desirable nor possible; hence, coherency of a distributed track, hypothesis, and global sensor allocation database is also an issue.
This paper presents an information-theoretic approach to sensor management for multitarget tracking using a sensor that operates in one of two modes: a fast, low-resolution mode and a slow, high-resolution mode. The error correlations between nearby target pairs, the sensor rates, the sensor resolutions, and the target plant noise all play a role in the optimum choice of mode. The error correlations occur in the target location estimates even when the individual measurement errors are uncorrelated, as in the model considered here. When a filter that models these error correlations is used, such as event-averaged maximum likelihood estimation, a sensor management strategy can be developed to reduce them. This is illustrated with a model two-target problem. In the model problem, the target plant noise is such that the low-resolution mode produces the optimum result when the targets are widely separated, due to its higher report rate. If the error correlations are not modeled, then over a certain parameter range the low-resolution mode would be selected for all target separations. When the effect of error correlations is included, it is shown that the slow, high-resolution mode produces a better result when the targets are close together. This suggests that systems that must track closely spaced targets could benefit from adaptively adjusting their integration times based on target plant noise and separation.
A methodology for designing an optimum fuzzy tracker is presented. The method uses genetic algorithms and is based upon minimizing a weighted combination of performance criteria. The resulting fuzzy tracker gives a performance that is superior to that of traditional trackers and shows a marked improvement over fuzzy trackers designed without the use of genetic algorithms. An application is provided to illustrate the effectiveness of the methodology.
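The genetic search over tracker parameters can be sketched as a minimal real-coded GA minimizing a weighted cost; the population size, operators, and the toy quadratic cost in the test are illustrative assumptions, not the paper's design:

```python
import random

def genetic_minimize(cost, bounds, pop_size=30, generations=60, seed=0):
    """Minimal real-coded genetic algorithm.

    Uses tournament selection, blend (averaging) crossover, Gaussian
    mutation, and elitism. `cost` is the weighted combination of
    performance criteria to be minimized; `bounds` is a list of
    (lo, hi) ranges, one per parameter."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=cost)
        pop = scored[:2]  # elitism: carry the two best forward
        while len(pop) < pop_size:
            # tournament selection of two parents
            a, b = (min(rng.sample(scored, 3), key=cost) for _ in range(2))
            # blend crossover plus Gaussian mutation, clamped to bounds
            child = [(x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            pop.append([min(max(v, lo), hi)
                        for v, (lo, hi) in zip(child, bounds)])
    return min(pop, key=cost)
```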
Currently, detection, tracking, and classification functions are performed sequentially; for example, tracks are initiated only after objects have been detected. This leads to the utilization of only partial information for each of the surveillance functions: the information about an object being on a specific track is not utilized for object detection. This, in turn, leads to unnecessary limitations on system performance or to stringent and expensive sensor requirements. We have developed a novel approach to enhancing surveillance functions by combining several functions and by utilizing all the available information for each function, based on the maximum likelihood adaptive neural system (MLANS). The MLANS capability for general model-based processing permits combining such functions as data correlation, detection, track estimation, and classification. In this application, a generic MLANS architecture implements a model that combines a classification model, based on statistical distributions of object features, with a dynamical model of object motion. The MLANS learning mechanism results in a maximum likelihood estimation of the model parameters, yielding concurrent estimates of data association probabilities, track parameters, and object classification. This approach results in a dramatic improvement: the MLANS tracker exceeds the performance of existing tracking algorithms due to optimal utilization of all the available data.
Westinghouse has developed and demonstrated a system that performs multisensor detection and tracking of tactical ballistic missiles (TBM). Under a USAF High Gear Program, we developed knowledge-based techniques to discriminate TBM targets from ground clutter, air breathing targets, and false alarms. Upon track initiation the optimal estimate of the target's launch point, impact point and instantaneous position was computed by fusing returns from noncollocated multiple sensors. The system also distinguishes different missile types during the boost phase and forms multiple hypotheses to account for measurement and knowledge base uncertainties. This paper outlines the salient features of the knowledge-based processing of the multisensor data.
Automatic target recognition makes it possible to provide weapons with lock-on-after-launch (LOAL) capability. Such applications require real-time characterization of digital images. One method for this has been partially achieved using a modification of the Karhunen-Loève transformation. This technique makes unsupervised lower-dimensional characterization feasible, so that sensor position can be located in a digital image.
Edge-data compression is concerned with obtaining a representation of an edge that preserves its shape, continuity, and smoothness with fewer data points than a bitmap requires. Achieving a very high degree of edge-data compression without much loss in shape, continuity, and smoothness is an extremely difficult problem in edge representation. Bézier polynomials have a parametric form and are frequently used in computer graphics for the interactive generation of smooth curves. Recent studies have resulted in algorithms for an approximate solution to the edge-fitting problem using Bézier polynomials. It is shown that application of these algorithms to edges results in a mathematical model capable of a very high degree of data compression. Suitable choices of optimality criteria and applications are also discussed.
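The compression comes from replacing a run of edge pixels with the few control points of a Bézier segment; recovering curve points from the control points uses de Casteljau's algorithm (a generic sketch, not the paper's fitting algorithm):

```python
def bezier_point(ctrl, t):
    """Evaluate a Bézier curve at parameter t by de Casteljau's algorithm.

    ctrl is a list of (x, y) control points; a cubic segment stores
    just 4 points instead of every edge pixel, which is the source
    of the data compression."""
    pts = list(ctrl)
    while len(pts) > 1:
        # repeated linear interpolation between consecutive points
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]
```

The curve interpolates the first and last control points and bends toward the interior ones.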
Matching of occluded objects is a difficult problem. Moreover, the problem is more difficult when scale-invariant matching is needed. A scale invariant representation is essential for this application. In this paper, we propose using the wavelet transform of a boundary to obtain a scale-invariant representation. We use the cubic B-spline as a smoothing function of the wavelet transform since the B-spline is analytically well defined and simple to implement. We implement the fast continuous wavelet transform by using a dyadic wavelet decomposition and dilated B-splines. As a result of using the wavelet transform, we obtain boundaries at various scales while using a small number of data points. The existing scale-space image approaches are not effective for occluded object matching since they use a normalized x-axis and too many data points. We propose a new scale-invariant representation similar to the scale-space image. The representation is generated by locating zero-crossings of the curvature function of boundaries at only the scales where the number of zero-crossings is changing. We scale the x-axis for each scale instead of using the same normalization for all scales. The proposed representation is scale-invariant and appropriate for scale-invariant matching with occlusion.
Two algorithms were developed to detect linear or quasi-linear features in images, one using the windowed Radon transform and the other using the Hough transform. Two images from different types of sensors and with different features were processed using the two algorithms, and the results are compared with each other. The Radon transform algorithm yielded a reasonable noise reduction and retained the major linear features. With the Hough transform, linear feature plots containing different sets of line segments can be obtained by taking different combinations of thresholds.
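The Hough voting scheme the second algorithm relies on can be sketched as follows; the accumulator resolution and vote threshold are illustrative assumptions:

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, threshold=5):
    """Accumulate votes in (theta, rho) space for a set of edge points.

    Each point (x, y) votes for every line through it via the normal
    parameterization rho = x*cos(theta) + y*sin(theta); cells whose
    vote count reaches the threshold are returned as detected lines."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return [cell for cell, votes in acc.items() if votes >= threshold]
```

Varying `threshold` changes which sets of line segments survive, which is exactly the threshold-combination behavior described above.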
A method for the recognition of moving objects from a sequence of time-varying images is presented. The method consists of two phases: an estimation phase for the optical flow field, and an interpretation phase in which a qualitative analysis of the optical flow patterns is performed. The two phases interact with each other in order to produce a final map in which areas of the image affected by the same motion are isolated and classified. For the estimation phase a gradient-based approach has been selected that provides a linear optical flow map. In the interpretation phase the optical flow field is regarded as a 2D linear system of differential equations, and the geometric theory of differential equations is then applied. The whole algorithm is implemented by means of a Hopfield neural network. Experimental results on synthetic images are given.
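The qualitative interpretation rests on the standard trace-determinant classification of a 2D linear system; a sketch of that classification step, assuming the flow matrix entries have already been estimated:

```python
def classify_flow(a, b, c, d):
    """Qualitatively classify the 2-D linear system [u, v]' = A [x, y]
    with A = [[a, b], [c, d]], via its trace and determinant, as in the
    geometric theory of ODEs: saddle, center, node, or focus (spiral)
    patterns of the optical flow field."""
    tr, det = a + d, a * d - b * c
    if det < 0:
        return "saddle"        # real eigenvalues of opposite sign
    if det > 0 and tr == 0:
        return "center"        # purely imaginary eigenvalues
    if tr * tr - 4 * det >= 0:
        return "node"          # real eigenvalues of the same sign
    return "focus"             # complex eigenvalues, nonzero real part
```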
This paper describes the design and implementation of an inspection system that detects and classifies flaws in uniform web materials. The first part of the paper describes a general procedure for designing such an inspection system. The second part concentrates on a case study and details the specific algorithms and results. The overall inspection system design incorporates five subsystems: sensing, flaw detection, flaw characterization, feature analysis, and classification. The case study involves flaws consisting of bloblike structural elements; a specific spatial arrangement of the blobs is the major characteristic of the flaw class under study. The emphasis of the paper is on the recognition of the flaws independently of their position, orientation, and size. The study incorporates analysis of synthetically generated flaws as well as flaws acquired on the production line.
Signal Processing and Sensor Fusion in the Midcourse Space Experiment
This paper describes the MSX program objectives, target missions, data management architecture, and organization.
The principal focus of MSX is to collect target and background phenomenology data in support of a variety of civilian science objectives in earth and atmospheric remote sensing and astronomy. This paper describes the MSX spacecraft and instrumentation, and summarizes the planned observations.
This paper discusses the postflight ground processing algorithms for the Midcourse Space Experiment (MSX) Spatial Infrared Imaging Telescope (SPIRIT) III radiometer data. The algorithm suite consists of image processing and object tracking that produces object histories for postflight signature studies. The image processing is centered around least squares estimators for single and multiple objects for classification and characterization of object parameters. Object-tracking algorithms are implemented to support association of objects across multiple scans by utilizing an adaptive nearest neighbor association technique. A discussion of the design requirements and performance of the implemented algorithms will be presented.
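The cross-scan association step can be illustrated by a greedy nearest-neighbor assignment of detections to track predictions; a fixed Euclidean gate stands in here for the adaptive gating described in the text:

```python
def nearest_neighbor_associate(tracks, detections, gate):
    """Greedy nearest-neighbor association of detections to tracks.

    Each track (a predicted (x, y) position) claims its closest
    still-unassigned detection within the gate distance; tracks with
    no detection inside the gate are left unassociated."""
    assigned, pairs = set(), {}
    for ti, t in enumerate(tracks):
        best, best_d = None, gate
        for di, d in enumerate(detections):
            if di in assigned:
                continue
            dist = ((t[0] - d[0]) ** 2 + (t[1] - d[1]) ** 2) ** 0.5
            if dist <= best_d:
                best, best_d = di, dist
        if best is not None:
            pairs[ti] = best
            assigned.add(best)
    return pairs
```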
Following a brief introduction to mathematical morphological operations, a novel architecture called the boundary scan processor is introduced and its application to binary and gray-scale morphology is demonstrated. The hardware complexity of the processor is analyzed and compared with other recently published architectures. Typical examples in space- (time-) variant and adaptive (data-dependent) morphology are also given.
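As a reference point for the operations such a processor accelerates, binary erosion over an explicit structuring element can be sketched directly (dilation is the dual; this is generic morphology, not the boundary-scan architecture itself):

```python
def erode(image, se):
    """Binary erosion of a 2-D 0/1 image.

    Output is 1 at (r, c) only when every offset (dr, dc) in the
    structuring element `se` lands on a 1 in the input image;
    out-of-range positions count as 0."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            out[r][c] = int(all(
                0 <= r + dr < rows and 0 <= c + dc < cols
                and image[r + dr][c + dc]
                for dr, dc in se))
    return out
```

Eroding a 3x3 block with a cross-shaped element leaves only the block's center pixel.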
The beam characteristics of a laser depend on various factors such as temperature, mechanical deficiencies of mounts, tolerance specifications, etc. As such, there is a tendency for the beam characteristics to deviate from the desired characteristics. This paper describes the development of a fuzzy-logic based controller to obtain and maintain specific output beam characteristics of an optical resonator.
An optical parametric oscillator (OPO) produces coherent optical radiation that is tunable over a wide range. In this paper, a novel technique for angle tuning an OPO with an acousto-optic Bragg cell is discussed. It is shown that the proposed scheme provides coarse as well as fine tuning of signal wavelengths at high speed, and hence is a promising alternative to conventional tuning techniques.
This paper proposes new optical butterfly interconnection network constructions based on the butterfly signal-flow diagrams for performing 1D and 2D fast Walsh-Hadamard transforms. We build the optical butterfly interconnection network hardware using binary phase diffraction gratings and masks. The systems are simple, regular in construction, and easily implemented by means of gratings and masks. They directly relate the optical processing to the underlying mathematical calculation, and are conveniently controlled and adjusted. These characteristics have been verified by computer simulation.
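The butterfly signal flow the gratings implement is the same one used by the software fast Walsh-Hadamard transform; a minimal (unnormalized, natural-ordered) sketch:

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a sequence whose length is a
    power of two, using in-place butterfly stages: at stride h, each
    pair (a[j], a[j+h]) is replaced by its sum and difference."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

An impulse transforms to a flat sequence, and a constant sequence concentrates all its energy in the first coefficient, the two classic sanity checks.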
In this paper, the carry-free property of modified-signed-digit (MSD) addition is analyzed with a space position logic encoding scheme. On this basis, MSD multiplication is discussed, and a fast MSD multiplication system composed of optoelectronic logic technology and a multilayer optical interconnection architecture is proposed and studied. Finally, the effectiveness of the fast MSD multiplication system is demonstrated using a 2×2-bit multiplication example, and experimental results are given.
A thresholding technique using relative entropy is proposed for vapor cloud detection. The idea is to cast a detection problem as a thresholding problem in which relative entropy is the detection criterion and the null and alternative hypotheses correspond to background and objects, respectively. Since the information content of an image can be characterized by its entropy, the original image and the thresholded bilevel image can be viewed as two sources; the relative entropy then becomes a natural measure of the mismatch between the two images. The smaller the relative entropy, the better the match between them. We interpret detection problems as image thresholding problems, where the null hypothesis corresponds to noise only and the alternative hypothesis represents the presence of a target. Three methods based on relative entropy are presented for chemical vapor cloud detection. The experimental results show that the suggested relative entropy-based methods can detect a vapor cloud very effectively. Their performance is also compared against two recently developed entropic thresholding techniques, the local entropy and joint entropy methods proposed by S.R. Pal and S.K. Pal, and the relative entropy-based methods are shown to outperform them.
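A concrete instance of entropy-driven threshold selection is the minimum cross-entropy form; this is our illustrative stand-in, not one of the paper's three methods:

```python
import math

def min_cross_entropy_threshold(hist):
    """Select a gray-level threshold minimizing the cross (relative)
    entropy between the image and its bilevel version, in the
    Li-Lee minimum cross-entropy form (an assumption here).

    hist[g] is the count of pixels with gray level g, for g >= 1.
    The criterion reduces to minimizing -m1*log(mu1) - m2*log(mu2),
    where mu1, mu2 are the class means and m1, m2 the class first
    moments below and above the threshold t."""
    best_t, best_val = None, float("inf")
    for t in range(2, len(hist)):
        n1 = sum(hist[g] for g in range(1, t))
        n2 = sum(hist[g] for g in range(t, len(hist)))
        if n1 == 0 or n2 == 0:
            continue  # a valid threshold must split the histogram
        mu1 = sum(g * hist[g] for g in range(1, t)) / n1
        mu2 = sum(g * hist[g] for g in range(t, len(hist))) / n2
        val = (-sum(g * hist[g] for g in range(1, t)) * math.log(mu1)
               - sum(g * hist[g] for g in range(t, len(hist))) * math.log(mu2))
        if val < best_val:
            best_t, best_val = t, val
    return best_t
```

On a cleanly bimodal histogram the selected threshold falls between the two modes.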
In this research, likelihood-ratio detection algorithms are derived for general types of speckle densities and receiver-noise models. It is shown that the mixture formulation leads naturally to a discard decision rule, in which certain pixels must be discarded (without replacement) prior to detection processing. Hence the scene processor involves a prefilter that passes over the data to apply the discard rule. Rules for determining the discard operation are derived and shown to depend on the specific models for the speckle, signal, and noise. It is also shown that the discard filter does not simply make a binary decision concerning the presence or absence of speckle at each pixel. Comparisons of the discard filter with windowed median filters and hard-limiting filters are shown. The work is applicable primarily to optical imaging sensors.
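A minimal sketch of the two-stage structure follows, with two loud assumptions: a hard intensity bound stands in for the model-derived discard rule, and a Gaussian-mean test stands in for the full likelihood ratio.

```python
import numpy as np

def discard_then_detect(pixels, discard_hi, tau):
    """Two-stage detector sketch: a prefilter discards speckle-dominated
    pixels (here flagged by a hard intensity bound), then a Gaussian-mean
    likelihood-ratio test on the survivors decides target vs. background."""
    kept = pixels[pixels <= discard_hi]    # discard without replacement
    if kept.size == 0:
        return False
    # For Gaussian background vs. mean-shifted target, the LRT reduces
    # to comparing the sample mean of the surviving pixels to a threshold.
    return bool(kept.mean() > tau)
```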
Methods for automatic detection of target areas from SAR images have been investigated. Algorithms have been developed for the extraction of regions such as lakes and urban areas and of linear features such as roads, rivers, and bridges. These methods are, at present, mostly based on gray-level and gradient thresholding schemes. A comparison of the experimental results is presented.
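A minimal version of such a scheme (parameter names are my own; the actual algorithms in the paper are more involved) combines a gray-level window with a gradient-magnitude bound to keep the interiors of homogeneous regions:

```python
import numpy as np

def extract_regions(img, low, high, grad_max):
    """Keep pixels whose gray level lies in [low, high] and whose local
    gradient magnitude is below grad_max (homogeneous region interiors,
    e.g. the body of a lake in a SAR image)."""
    gy, gx = np.gradient(img.astype(float))    # row- and column-direction gradients
    grad = np.hypot(gx, gy)
    return (img >= low) & (img <= high) & (grad <= grad_max)
```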
One of the basic problems in pattern recognition is the detection of a pattern in noise. This problem becomes particularly difficult if the spectral content of the signal and noise overlap; in this case, high levels of noise make detection of the signal very difficult. Noise cancellation using adaptive filters has been successful when the characteristics of the noise source and the signal are known. Another problem in pattern recognition involves recognizing the same pattern in different spatial positions. Some special higher-order neural networks have been shown to exhibit positional invariance, but these systems do not work well in noisy environments. The combined problem of identifying a target that varies in position and is embedded in noise has been approached by cascading systems that attempt to remove the noise and then detect the target with positionally invariant systems. In this paper, a number of different approaches to detecting a specific, translationally shifted target in noise are examined and compared. These techniques include, among others, adaptive filtering and a higher-order neural network. The higher-order neural network incorporates both translational invariance and noise reduction.
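The translation-invariance trick in higher-order networks comes from weight sharing over relative displacements: a second-order unit sums the products of all pixel pairs that share the same offset, which for circular shifts is exactly the image autocorrelation. A sketch under that reading (the FFT is just a fast way to form all the pairwise-product sums):

```python
import numpy as np

def honn_features(img):
    """Second-order features pooled over absolute position: summing the
    products of all pixel pairs sharing a relative displacement gives the
    circular autocorrelation, computed here via the FFT."""
    f = np.fft.fft2(img)
    return np.real(np.fft.ifft2(f * np.conj(f)))
```

Because the feature map depends only on relative displacements, circularly shifting the input leaves it unchanged.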
The aim of this presentation is to describe a new theoretical approach to developing nonlinear remote sensing imaging (RSI) techniques, one that fuses experiment-design methods with statistical-regularization methods for inverse problem solution, optimal or suboptimal in a mixed Bayesian-regularization setting. The purpose of this information-fusion-based methodology is twofold: to design an appropriate system-oriented, finite-dimensional model of the RSI experiment in terms of projection schemes for wavefield inversion problems, and to derive two-stage estimation techniques that provide optimal or suboptimal restoration of the power distribution in the environment from a limited number of wavefield measurements. We also discuss the control of additional degrees of freedom available while such an RSI experiment is conducted.
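In its simplest linear form, the regularization half of this story reduces to a Tikhonov/MAP estimate: given a finite-dimensional projection model y = Ax + n, estimate x by penalized least squares. A sketch only; the paper's two-stage technique is richer than this.

```python
import numpy as np

def regularized_inverse(A, y, lam):
    """Tikhonov/MAP estimate x = (A^T A + lam*I)^(-1) A^T y: a least-squares
    fit to the measurements y = A x + n, stabilized by the penalty lam*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

With noiseless data and a vanishing penalty the estimate recovers the true parameter vector; in practice lam trades bias for noise suppression.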
We propose a timed neural net (TNN) approach to the problem of recognition of moving targets. We consider a synchronous timed Petri net (TPN) as a model for this timed neural net. In a TPN, the transitions are enabled and fired by using a 'time' token. A group of place nodes and their corresponding transition nodes model a neuron in a TNN. In order to classify the type of motion that a moving target is executing, we view an image sequence as a single image evolving in time. The reachability set R(t) at any instant of time represents a snapshot of the weight matrix of a static neural net recognizing the target. The motion classification is achieved by analyzing R(t). An example illustrating the approach is constructed.
This study uses simulated data from a set of two-band sensors and a set of three-band sensors. There are dozens of warheads and numerous decoys simulated (several decoys for each warhead). For a large part of the scenario, the objects are so close that individual targets cannot be resolved by the sensors, even though a single object's infrared signature could readily be detected. In those cases, multiple objects are seen as a single object, with the summed intensity of several objects. A BODE discrimination technique, which fits a quadratic and a sinusoid to the infrared time histories, is used to attempt to distinguish the warheads from the decoys. The average coefficients of the curve fit, along with their covariances, are used as features which describe the two object sets (warheads and decoys). Warheads and decoys can be readily distinguished once objects are far enough apart so that no multiple objects are mistaken as single objects. But when a cluster of objects appears as one object on the sensor focal plane, it is apparently impossible to tell whether or not a warhead is present in the cluster.
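The quadratic-plus-sinusoid fit is linear in its coefficients once the sinusoid frequency is fixed (an assumption on my part; the paper does not spell out its fitting procedure), so extracting the feature coefficients reduces to ordinary least squares on a five-column design matrix:

```python
import numpy as np

def fit_quad_plus_sin(t, y, omega):
    """Least-squares fit of y ~ a*t^2 + b*t + c + p*sin(omega*t) + q*cos(omega*t).
    Returns (a, b, c, p, q); the sin/cos pair is equivalent to a sinusoid
    with free amplitude and phase at the fixed frequency omega."""
    X = np.column_stack([t**2, t, np.ones_like(t),
                         np.sin(omega * t), np.cos(omega * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

The fitted coefficients (and their covariances, estimable from the residuals) then serve as the discrimination features for each object's time history.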
The research presented in this paper is focused on the development of mathematical foundations and algorithms for locating multiple targets in film images. A locator algorithm with firm mathematical foundations has been developed; it is based upon the assumption that an image is composed of a mixture of component distributions stemming from the different background and target regions in the image. Using both a maximum likelihood estimator and an iterative clustering algorithm, the locator separates the mixture into its major components. Once the components of the mixture are available, objects are located by classifying the components and recognizing the targets based upon their size and shape. The algorithm has been tested on a set of images selected for anticipated difficult situations, and it has demonstrated the ability to locate multiple objects in noisy and cluttered background scenes.
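For a one-dimensional, two-component Gaussian mixture, the maximum-likelihood separation step can be sketched with plain EM; this is my own minimal stand-in for the paper's combined ML/clustering machinery, not its actual algorithm.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture: the E-step computes
    membership weights (responsibilities), the M-step re-estimates the means,
    variances, and mixing proportions."""
    mu = np.percentile(x, [25, 75])          # crude initial component means
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        n = r.sum(axis=0)
        pi = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return mu, var, pi
```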
This paper describes applications of the maximum likelihood adaptive neural system (MLANS) to the characterization of clutter in IR images and to the identification of targets. The characterization of image clutter is needed to improve target detection and to enhance the ability to compare the performance of different algorithms using diverse imagery data. Enhanced unambiguous IFF is important for fratricide reduction, while automatic cueing and targeting is becoming an ever-increasing part of operations. We utilized MLANS, a parametric neural network that combines optimal statistical techniques with a model-based approach. This paper shows that MLANS outperforms classical classifiers, namely the quadratic classifier and the nearest-neighbor classifier: on the one hand, it is not limited to the usual Gaussian distribution assumption and can adapt in real time to the image clutter distribution; on the other hand, it learns from fewer samples and is more robust than the nearest-neighbor classifier. Future research will address uncooperative IFF using fused IR and MMW data.
Noise is a common problem for imaging sensors. Multiframe averaging is a natural way to improve the SNR, which in turn improves the performance of an automatic target recognizer (ATR) that uses the imagery provided by that sensor. When the ATR is located on a weapon platform that is rapidly approaching a set of potential targets, it is appropriate to use a weighted moving average: the more recent frames are collected at ranges closer to the targets than the older frames, so they should receive more weight. Exponential smoothing is nothing more than a weighted moving average whose weighting factors follow a geometric progression, and its advantage is that it can be implemented in hardware for real-time applications. This paper shows that an exponentially weighted moving average is equivalent to an exponential smoothing technique. Lastly, we describe the way in which exponential smoothing can be used in a real-time ATR to improve performance at very little cost, without a time penalty for signal processing.
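The claimed equivalence is easy to check numerically: the recursive filter s_t = a·x_t + (1-a)·s_{t-1} produces exactly the moving average whose weights decay geometrically toward older frames (assuming, as I do here, that the filter is initialized with the first frame).

```python
import numpy as np

def exp_smooth(x, alpha):
    """Recursive exponential smoothing, s_t = alpha*x_t + (1-alpha)*s_{t-1},
    initialized with the first frame."""
    s = x[0]
    out = [s]
    for v in x[1:]:
        s = alpha * v + (1 - alpha) * s
        out.append(s)
    return np.array(out)

def geometric_wma(x, alpha):
    """The same filter written as an explicit moving average whose weights
    decay geometrically toward older frames (newest frame weighted most)."""
    n = len(x)
    w = alpha * (1 - alpha) ** np.arange(n - 1, -1, -1.0)
    w[0] = (1 - alpha) ** (n - 1)   # weight absorbed by the initial condition
    return float(np.dot(w, x))
```

The recursion needs only one multiply-accumulate per new frame, which is what makes the hardware real-time implementation cheap.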
Pattern recognition systems have been developed using a variety of technologies from many disciplines, and with the development of new technologies comes the possibility of improving pre-existing recognition systems. Such is the case with applying object-oriented programming concepts from computer science to object recognition. Efficient object recognition imposes new requirements on the library (database) of objects: the library has to go beyond the role of a simple storage medium and provide efficient retrieval and management capabilities for the objects' information. An entity can be stored in an object structure along with its descriptive attributes or features. In identifying an unknown object, the object recognition system queries the database by passing messages that check the similarities of the unknown object to each of the objects in the database on a feature basis. The only interface the database shares with the object recognition system is this message passing, which allows flexibility in how the database processes the messages; this is only one of many advantages of using an object-oriented database in an object recognition system. An object recognition system utilizing object-oriented concepts is developed in detail.
Substantial performance gains (classification speed) are obtained in object recognition systems for large object databases if the database is pre-screened. This pre-screening is carried out by applying successive screening filters to the database to obtain a reduced (candidate) set of objects. These filters operate on object features, eliminating objects whose features do not resemble, in some sense, those of the unknown object. The result is a reduced set of objects over which measures of similarity are applied to obtain the unknown object's final classification. It has been observed that the order in which these screening filters are applied to the database has a noticeable effect on the size of the resulting candidate set. Additionally, the way a particular feature partitions the pattern space (the number of partitions) and the distribution of the pattern classes among the different partitions also have substantial effects on the size of the resulting candidate set. This paper investigates the classification performance variations for different feature-ordering schemes, as well as the effects of the pattern distributions on the partitions in relation to the filter ordering. Experimental results showing the effects of different combinations of feature ordering and pattern partition distribution are also included.
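The ordering effect is easy to demonstrate with hypothetical filters of my own: the final candidate set is the same in any order, but running the most selective filter first minimizes the number of feature comparisons the later filters must perform.

```python
def screen(database, filters):
    """Apply screening filters in sequence; each pass keeps only the objects
    the filter accepts. Returns the candidate set and the total number of
    feature comparisons performed (one per object per pass)."""
    candidates, comparisons = list(database), 0
    for keep in filters:
        comparisons += len(candidates)
        candidates = [obj for obj in candidates if keep(obj)]
    return candidates, comparisons
```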
In this paper, a parameter-free procedure applicable to the detection of a signal in some element of a multiple-resolution-element radar is described. The procedure is based on parameter-free statistics obtained from the generalized maximum likelihood ratio and allows one to eliminate the unknown parameters from the problem. This procedure can be employed whenever the parameters of the distributions of the signal data and no-signal data are unknown. It allows one to find an adaptive test that adjusts itself to the recent clutter level and improves target detectability over various clutter, such as ground, sea, and weather clutter. One advantage of the adaptive test is that, over a wide class of no-signal environments, the false alarm rate remains the same; in other words, the adaptive test achieves a fixed probability of false alarm that is invariant to intensity changes in the noise background. Also, no learning process is necessary in order to achieve the constant false alarm rate.
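The invariance to clutter level can be seen in a nonparametric rank test, one simple instance of a parameter-free statistic (my illustration, not the paper's exact test): the cell under test is compared only by rank against the reference cells, so any positive rescaling of the background leaves every decision, and hence the false alarm rate, unchanged.

```python
import numpy as np

def rank_cfar(cell, reference, k):
    """Nonparametric CFAR-style test: declare a detection (1) when the cell
    under test exceeds at least k of the reference (clutter) cells. Only
    ranks are used, so the false alarm rate does not depend on the absolute
    clutter level."""
    return int(np.sum(cell > np.asarray(reference)) >= k)
```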