This paper describes an integrated approach to sensor fusion and resource management applicable to sensor networks.
The sensor fusion and tracking algorithm is based on the theory of random sets. Tracking is herein considered to be the
estimation of parameters in a state space such that for a given target certain components, e.g., position and velocity, are
time varying and other components, e.g., identifying features, are stationary. The fusion algorithm provides at each
time step the posterior probability density function, known as the global density, on the state space, and the control
algorithm identifies the set of sensors that should be used at the next time step in order to minimize, subject to
constraints, an approximation of the expected entropy of the global density. The random set approach to target tracking
models association ambiguity by statistically weighting all possible hypotheses and associations. Computational
complexity is managed by approximating the posterior global density using a Gaussian mixture density and using an
approach based on the Kullback-Leibler metric to limit the number of components in the Gaussian mixture
representation. A closed form approximation of the expected entropy of the global density, expressed as a Gaussian
mixture density, at the next time step for a given set of proposed measurements is developed. Optimal sensor selection
involves a search over subsets of sensors, and the computational complexity of this search is managed by employing the
Möbius transformation. Field and simulated data from a sensor network composed of multiple range radars and
acoustic arrays that measure angle of arrival are used to demonstrate the approach to sensor fusion and resource
management.
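The abstract above refers to limiting the number of Gaussian mixture components with a Kullback-Leibler-based approach but does not spell out the procedure. The sketch below illustrates one common way such a reduction can be carried out: greedy, moment-preserving merging of the pair of components with the smallest Runnalls-style KL upper bound. The 1-D setting, the function names, and the specific cost are assumptions made for illustration, not the authors' algorithm.

```python
import numpy as np

def merge_two(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two 1-D Gaussian components (weight, mean, variance)."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def merge_cost(w1, m1, v1, w2, m2, v2):
    """Runnalls-style upper bound on the KL divergence incurred by merging the pair."""
    _, _, v = merge_two(w1, m1, v1, w2, m2, v2)
    return 0.5 * ((w1 + w2) * np.log(v) - w1 * np.log(v1) - w2 * np.log(v2))

def reduce_mixture(weights, means, variances, max_components):
    """Greedily merge the cheapest pair until at most max_components terms remain."""
    comps = list(zip(weights, means, variances))
    while len(comps) > max_components:
        pairs = [(merge_cost(*comps[i], *comps[j]), i, j)
                 for i in range(len(comps)) for j in range(i + 1, len(comps))]
        _, i, j = min(pairs)
        merged = merge_two(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    w, m, v = zip(*comps)
    return np.array(w), np.array(m), np.array(v)
```

Merging continues until the permitted component count is reached, so approximation error accrues where the KL penalty is smallest.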
This paper investigates methods of decision-making from uncertain and disparate data. The need for such methods arises in sensing application areas in which multiple and diverse sensing modalities are available, but the information provided can be imprecise or only indirectly related to the effects to be discerned. Biological sensing for biodefense is an important instance of such applications. Information fusion in that context is the focus of a research program now underway at MIT Lincoln Laboratory. The paper outlines a multi-level, multi-classifier recognition architecture developed within this program, and discusses its components. Information source uncertainty is quantified and exploited to improve the quality of the data that constitute the input to the classification processes. Several methods of sensor uncertainty exploitation at the feature level are proposed and their efficacy is investigated. Other aspects of the program are discussed as well. While the primary focus of the paper is on biodefense, the applicability of the concepts and techniques presented here extends to other multisensor fusion application domains.
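The architecture itself is only outlined above; as a hedged illustration of how quantified source uncertainty might enter at the feature level, the sketch below applies inverse-variance weighting when combining aligned feature estimates from several sources. The interface and the weighting rule are assumptions, not the Lincoln Laboratory design.

```python
import numpy as np

def fuse_features(feature_sets, variances):
    """Combine aligned feature vectors from several sources by inverse-variance weighting.

    feature_sets: list of length-d arrays, one per information source.
    variances:    list of length-d arrays quantifying each source's uncertainty.
    Sources reporting lower uncertainty contribute more to the fused feature vector.
    """
    f = np.asarray(feature_sets, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    return (w * f).sum(axis=0) / w.sum(axis=0)
```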
Well-chosen background models are critical to accurately predict the performance of hyperspectral detection and classification algorithms and to evaluate the effects on system performance of variation in environmental or sensor parameters. Such models also have implications for the derivation of optimal algorithms. First-principles physical models and statistical models have been developed for these purposes. However, in many circumstances these models may not accurately represent hyperspectral data that are complicated by intra-class variability and subpixel mixing of materials as well as atmospheric, illumination, temperature (in the emissive regime), and sensor effects. In this paper we propose a statistical representation of hyperspectral data defined by class parameters and an abundance probability distribution. Various representations of the probability distribution function of the abundance values are developed and compared with data to determine whether the estimated abundance distributions and intra-class variation explain the observed heavy tails in the data. The consequences of the Gaussian endmembers of the normal compositional model violating the non-negativity constraint are also investigated.
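To make the proposed representation concrete, the following sketch simulates pixels from a compositional model with Gaussian classes mixed by random abundances. A Dirichlet abundance distribution is used here purely as one candidate satisfying the non-negativity and sum-to-one constraints; the paper compares several representations, and none of the names below are drawn from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pixels(means, covs, alpha, n_pixels):
    """Draw pixels from a compositional model: Gaussian classes mixed by random abundances.

    means: (K, B) class mean spectra; covs: (K, B, B) class covariance matrices;
    alpha: length-K Dirichlet concentration (one candidate abundance distribution).
    """
    K, B = means.shape
    abund = rng.dirichlet(alpha, size=n_pixels)      # non-negative, sum-to-one abundances
    pixels = np.empty((n_pixels, B))
    for n in range(n_pixels):
        # each class contributes a random (not fixed) spectrum, capturing intra-class variation
        draws = np.array([rng.multivariate_normal(means[k], covs[k]) for k in range(K)])
        pixels[n] = abund[n] @ draws                 # subpixel mixing of the random endmembers
    return pixels, abund
```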
The normal compositional model (NCM) simultaneously models subpixel mixing and intra-class variation in multidimensional imagery. It may be used as the foundation for the derivation of supervised and unsupervised classification and detection algorithms. Results from applying the algorithm to AVIRIS SWIR data collected over Cuprite, Nevada are described. The NCM class means are compared with library spectra using the Tetracorder algorithm. Of the eighteen classes used to model the data, eleven are associated with minerals that are known to be in the scene and are distinguishable in the SWIR, five are identified with Fe-bearing minerals that are not further classifiable using SWIR data, and the remaining two are spatially diffuse mixtures. The NCM classes distinguish (1) high and moderate temperature alunites, (2) dickite and kaolinite, and (3) high and moderate aluminum concentration muscovite. Estimated abundance maps locate many of the known mineral features. Furthermore, the NCM class means are compared with corresponding endmembers estimated using a linear mixture model (LMM). Of the eleven identifiable (NCM class mean, LMM endmember) pairs, ten are consistently identified, while the NCM estimation procedure reveals a diagnostic feature of the eleventh that is more obscure in the corresponding endmember and results in conflicting identifications.
The normal compositional model (NCM) is a descriptive model that explicitly accounts for sub-pixel mixing and random variation of the spectrum of a material. In this paper the normal compositional model, defined in an earlier work, is extended to include an additive term that may represent path radiance and additive sensor noise. If the covariance matrix of the additive term is non-singular, as may be assumed since it includes the covariance matrix of the additive noise, the covariance matrices of the other classes need not be non-singular. Thus the current model synthesizes the linear unmixing and Gaussian clustering algorithms. Anomaly and matched target detection algorithms based on these three models are compared using ocean hyperspectral imagery, and for these data the NCM approach reduces the false alarm probability by more than an order of magnitude. The linear mixture and normal compositional models separate surface reflections and upwelling light more effectively than the Gaussian clustering algorithm. Furthermore, greater inter-band correlation is estimated using the subpixel covariance estimation methodology than using the pure pixel modeling approach.
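As a rough illustration of how such a model supports detection (not taken from the paper), the sketch below computes the per-pixel mean and covariance implied by an NCM with an additive term, assuming independent Gaussian class vectors, and evaluates a matched-filter statistic against them. The function names and the specific statistic are assumptions.

```python
import numpy as np

def ncm_pixel_moments(abund, mu, Sigma, mu0, Sigma0):
    """Mean and covariance of a pixel x = sum_k a_k E_k + w under independent Gaussian terms.

    abund: (K,) abundances; mu: (K, B) class means; Sigma: (K, B, B) class covariances;
    mu0, Sigma0: mean and (non-singular) covariance of the additive term.
    """
    m = abund @ mu + mu0
    C = np.tensordot(abund ** 2, Sigma, axes=1) + Sigma0
    return m, C

def matched_filter_score(x, target, m, C):
    """A matched-filter statistic for an additive target signature under these moments."""
    Cinv = np.linalg.inv(C)
    return float(target @ Cinv @ (x - m)) / np.sqrt(float(target @ Cinv @ target))
```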
Hyperspectral data are often modeled using either a linear mixture or a statistical classification approach. The linear mixture model describes each spectral vector as a constrained linear combination of end-member spectra, whereas the classification approach models each spectrum as a realization of a random vector having one of several normal distributions. In this work we describe a stochastic compositional model that synthesizes these two viewpoints and models each spectrum as a constrained linear combination of random vectors. Maximum likelihood methods of estimating the parameters of the model, assuming normally distributed random vectors, are described, and anomaly and likelihood ratio detection statistics are defined. Detection algorithms derived from the classification, linear mixing, and stochastic compositional models are compared using data consisting of ocean hyperspectral imagery to which the signature of a personal flotation device has been added at pixel fill fractions (PFF) of five and ten percent. These results show that detection algorithms based on the stochastic compositional model may significantly improve detection performance. For example, this study shows that, at a 5% PFF and a probability of detection of 0.8, the false alarm probabilities of anomaly and likelihood detection algorithms based on the stochastic compositional model are more than an order of magnitude lower than the false alarm probabilities of comparable algorithms based on either a linear unmixing algorithm or a Gaussian mixture model.
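The detection experiment described above implants a target signature at a small pixel fill fraction and scores each pixel. The fragment below reproduces only the generic parts of such a comparison: a replacement-model implant and a standard RX (Mahalanobis) anomaly statistic. It is a baseline sketch, not the stochastic-compositional detectors derived in the paper.

```python
import numpy as np

def implant_target(background, signature, fill_fraction):
    """Replacement-model implant: the pixel becomes (1 - f) * background + f * signature."""
    return (1.0 - fill_fraction) * background + fill_fraction * signature

def rx_anomaly_scores(pixels):
    """Global RX anomaly statistic: Mahalanobis distance of each pixel to the scene mean."""
    mu = pixels.mean(axis=0)
    Cinv = np.linalg.inv(np.cov(pixels, rowvar=False))
    d = pixels - mu
    return np.einsum("nb,bc,nc->n", d, Cinv, d)
```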
KEYWORDS: Signal to noise ratio, Transform theory, Signal attenuation, Detection and tracking algorithms, Quantization, Signal detection, Sensors, Optical filters, Hyperspectral imaging, Image filtering
Hyperspectral images may be collected in tens to hundreds of spectral bands having bandwidths on the order of 1-10 nanometers. Principal component (PC), maximum-noise-fraction (MNF), and vector quantization (VQ) transforms are used for dimension reduction and subspace selection. The impact of the PC, MNF, and VQ transforms on image quality is measured in terms of mean-squared error, image-plus-noise variance to noise variance, and maximal-angle error, respectively. These transforms are not optimal for detection problems. The signal-to-noise ratio (SNR) is a fundamental parameter for detection and classification. In particular, for additive signals in a normally distributed background, the performance of the matched filter depends on SNR, and the performance of the quadratic anomaly detector depends on SNR and the number of degrees-of-freedom. In this paper we demonstrate the loss in SNR that can occur from the application of the PC, MNF, and VQ transforms. We define a whitened-vector-quantization (WVQ) transform that can be used to reduce the dimension of the data such that the loss in SNR is bounded, and we construct a transform (SSP) that preserves SNR for signals contained in a given subspace such that the dimension of the image of the transform is the dimension of the subspace.
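The SNR loss discussed above can be checked numerically: the matched-filter SNR for an additive signal s in background covariance C is s'C⁻¹s, and it can be recomputed after any linear dimension-reducing transform. The sketch below compares the two quantities for a principal-component transform; the helper names are illustrative, and the WVQ and SSP transforms of the paper are not reproduced.

```python
import numpy as np

def matched_filter_snr(signal, cov):
    """Matched-filter SNR for an additive signal s in correlated noise: s' C^{-1} s."""
    return float(signal @ np.linalg.solve(cov, signal))

def snr_after_transform(signal, cov, T):
    """SNR after a linear dimension-reducing transform T whose rows span the retained subspace."""
    s_t, C_t = T @ signal, T @ cov @ T.T
    return float(s_t @ np.linalg.solve(C_t, s_t))

def pc_transform(cov, k):
    """Keep the k leading principal components of the background covariance."""
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k].T             # (k, B) matrix of leading eigenvectors as rows
```

Comparing the two SNR values for a given number of retained components quantifies the loss the abstract warns about.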
KEYWORDS: Sensors, Signal to noise ratio, Signal attenuation, Data modeling, Reflectivity, Remote sensing, Data compression, Optical filters, Image processing, Detection and tracking algorithms
Multispectral and hyperspectral sensors are being used for remote sensing and imaging of ocean waters. Many applications require the compression of hyperspectral data to achieve real-time transmission or exploitation. Hyperspectral data compression or reduction has been accomplished using techniques based upon principal component analysis or linear unmixing. Alternatively, data compression (reduction) may be performed by band selection, or band selection may be preliminary to either of the other compression techniques. Band selection also has implications for sensor design and the stability of estimates of processing parameters. In this study, we address the question of which bands are the most efficacious for imaging submerged objects, such as whales, using an anomaly detector or a matched filter. Bands are selected by optimizing a detection criterion subject to a constraint on the number of bands. The technique is applied to five hyperspectral data sets, and the optimum bandwidths and centers are determined. The loss in performance from selecting a reduced number of bands is tabulated, and the need for adaptive band selection is demonstrated.
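One plausible reading of "optimizing a detection criterion subject to a constraint on the number of bands" is a greedy search that repeatedly keeps the band giving the largest gain in matched-filter SNR. The sketch below implements that reading only; the paper's actual optimization, which also selects bandwidths and band centers, is not specified here, and the criterion shown is an assumption.

```python
import numpy as np

def greedy_band_selection(signal, cov, n_bands):
    """Greedily keep the bands that most increase the matched-filter SNR s' C^{-1} s."""
    selected, remaining = [], list(range(len(signal)))
    while len(selected) < n_bands:
        def snr_with(band):
            idx = selected + [band]
            s, C = signal[idx], cov[np.ix_(idx, idx)]
            return float(s @ np.linalg.solve(C, s))
        best = max(remaining, key=snr_with)     # band with the largest incremental SNR
        selected.append(best)
        remaining.remove(best)
    return sorted(selected)
```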
KEYWORDS: Signal detection, Sensors, Signal to noise ratio, Interference (communication), Data modeling, Radar, Target detection, Expectation maximization algorithms, Signal processing, Statistical modeling
Sea clutter amplitude is often modeled as a compound random variable Z = AX, where A is a positive-valued random variable and X has a Rayleigh distribution. The K, class A, and discrete Rayleigh mixture distributions can be derived from these assumptions. Moreover, successive values of A may be correlated. If A is modeled as a finite Markov process, Z is described by a hidden Markov model (HMM). The applicability of Rayleigh mixture and hidden Markov models to radar sea clutter is demonstrated empirically. Amplitude-only and phase-coherent detection statistics are derived from these models using locally optimal and likelihood ratio techniques. Robust implementations of the locally optimal processor based on the Rayleigh mixture model have been developed, and empirical ROC curves demonstrate performance improvement of up to 9 dB in comparison with a CFAR detector for small targets in sea clutter. In a test case, the locally optimal hidden Markov detector is then shown to offer an additional 3 dB over the Gaussian mixture detector. Further examples compare the amplitude and phase-coherent hidden Markov detectors with CFAR and Doppler processors.
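The compound amplitude model and the CFAR baseline mentioned above can be sketched as follows: the texture A is drawn from a discrete distribution (letting A follow a Markov chain over the same levels would give the hidden Markov variant), the speckle X is Rayleigh, and detections are declared by a simple cell-averaging CFAR. All parameters and function names are illustrative, and the locally optimal and phase-coherent detectors of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def compound_clutter(n, texture_levels, texture_probs, sigma=1.0):
    """Compound amplitude Z = A * X: discrete texture A times Rayleigh speckle X."""
    a = rng.choice(texture_levels, size=n, p=texture_probs)   # texture (independent draws)
    x = rng.rayleigh(scale=sigma, size=n)                     # speckle
    return a * x

def ca_cfar(z, guard=2, train=16, scale=4.0):
    """Baseline cell-averaging CFAR: flag cells exceeding a scaled mean of training cells."""
    n, det = len(z), np.zeros(len(z), dtype=bool)
    for i in range(n):
        left = z[max(0, i - guard - train):max(0, i - guard)]
        right = z[min(n, i + guard + 1):min(n, i + guard + train + 1)]
        window = np.concatenate([left, right])
        det[i] = window.size > 0 and z[i] > scale * window.mean()
    return det
```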