Decision fusion benefits in a three-sensor suite environment, wherein the sensor suite had only a single observation opportunity, were assessed in a recent study. That study is extended here to the more general case wherein the sensors have multiple observation opportunities, permitting, in essence, temporal fusion. In the earlier study, two different fusion system architectures were conceived: (1) single-stage, wherein the outputs from all three sensors are fused simultaneously, and (2) dual-stage, wherein fusion occurs in two stages, first between two sensors and then between this fused output and the third sensor. This study addresses the problem of temporal fusion, i.e., fusion across multiple observations, under the single-stage fusion system architecture, examining all four fusion strategies identified in the previous study. The special case of matched sensors with identical performance characteristics is used to parametrically compare and contrast the asymptotic performances under the different strategies.
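As a concrete illustration (not one of the study's own strategies), a log-likelihood-ratio sum is a standard way to fuse repeated binary decisions from matched sensors whose detection probability `p_d` and false-alarm probability `p_fa` are known; the function below is a minimal sketch under that assumption:

```python
import math

def fuse_decisions(decisions, p_d, p_fa):
    """Fuse binary decisions (1 = target present) from matched sensors
    across multiple observation opportunities by summing per-decision
    log-likelihood ratios and thresholding at zero."""
    llr = 0.0
    for u in decisions:
        if u == 1:
            llr += math.log(p_d / p_fa)        # evidence for "present"
        else:
            llr += math.log((1 - p_d) / (1 - p_fa))  # evidence against
    return 1 if llr > 0 else 0
```

With reliable sensors (`p_d = 0.9`, `p_fa = 0.1`), a 2-of-3 majority of "present" decisions yields a fused "present" declaration.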
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
We consider the problem of recognizing M objects using a fusion center with N parallel sensors. Unlike conventional M-ary decision fusion systems, our fusion system breaks a complex M-ary decision fusion problem into a sequence of simpler binary decision fusion problems. In our system, a binary decision tree (BDT) is employed to hierarchically partition the object space at all system elements. The traversal of the BDT is synchronized by the fusion center. The sensor observations are assumed conditionally independent given the unknown object type. We use a greedy performance criterion in which the probability of error is minimized at individual nodes. Using this criterion, we characterize the optimal fusion rules and the optimal sensor rules, and we compare our results with some important results on conventional one-stage binary fusion.
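The synchronized BDT traversal described above can be sketched as follows; a simple majority vote stands in for the optimal fused rule at each node, and the tuple-based tree encoding and voting callback are illustrative assumptions, not the paper's formulation:

```python
def classify(tree, sensor_votes_fn):
    """Traverse a binary decision tree. At each internal node the fusion
    center fuses N parallel sensors' binary decisions (here: majority
    vote) to choose a branch; leaves are the decided object classes."""
    node = tree
    while isinstance(node, tuple):       # internal node: (left, right) subtrees
        votes = sensor_votes_fn(node)    # list of 0/1 decisions from the sensors
        branch = 1 if sum(votes) * 2 > len(votes) else 0
        node = node[branch]
    return node                          # leaf: the object class
```

Each fused binary decision descends one level, so an M-class problem needs only about log2(M) fused decisions per object.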
Error Correcting Output Coding (ECOC), an information-theoretic concept, is an attractive idea for improving the performance of automatic classifiers, particularly for problems that involve a large number of classes. Converting a complex multi-class problem into a few binary problems allows the use of less complex learning machines, which are then combined by assigning the class according to the closest distance to a code word defined by the ECOC matrix. We examine the conditions necessary for error reduction in the ECOC framework and introduce a new version of ECOC, called circular ECOC, which is less sensitive to code word selection. To demonstrate the error-reduction process and compare the two algorithms, we design an artificial benchmark on which we can control the noise rate and visualize the decision boundary to investigate behavior in different parts of the input space. Experimental results on a few popular real databases are also presented to reinforce our conclusions.
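The closest-code-word decoding step can be sketched as follows, assuming Hamming distance as the distance measure over binary classifier outputs:

```python
def ecoc_decode(binary_outputs, code_matrix):
    """Assign the class whose ECOC code word (a row of code_matrix) is
    closest in Hamming distance to the vector of binary-classifier
    outputs; ties go to the lowest class index."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    dists = [hamming(binary_outputs, row) for row in code_matrix]
    return min(range(len(code_matrix)), key=dists.__getitem__)
```

Because code words are spaced apart, a few flipped binary outputs can still decode to the correct class, which is the error-correcting property the abstract relies on.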
In this paper we propose a new method of integrating the predictions of multiple classifiers for data mining and machine learning tasks. The method assumes that each classifier stands in its own context and that the contexts are partially ordered. The order is defined by a monotonic quality function that maps each context to a value in the interval [0,1]. A classifier whose context has better quality is expected to predict better than a classifier with a worse-quality context. The objective is to generate the opinion of a `virtual' classifier standing in a context with quality equal to 1; by virtue of its best context, this virtual classifier should have the best prediction accuracy. To do this, we build a regression in which each prediction is weighted by the quality of the corresponding classifier's context. This regression gives us the best opinion at the point 1. Experiments on vowel recognition tasks demonstrate the validity of the approach.
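One way to read the weighted-regression idea is as a one-dimensional weighted least-squares fit of predictions against context quality, evaluated at quality 1. The scalar version below is a simplified sketch of that reading, with illustrative names; the paper's actual regression may differ:

```python
def virtual_prediction(qualities, predictions):
    """Fit y = a + b*q by least squares, weighting each point by its
    quality q, then return the fitted value at q = 1 (the 'virtual'
    best-context classifier's opinion)."""
    w = qualities                                  # weight = context quality
    sw   = sum(w)
    swq  = sum(wi * qi for wi, qi in zip(w, qualities))
    swy  = sum(wi * yi for wi, yi in zip(w, predictions))
    swqq = sum(wi * qi * qi for wi, qi in zip(w, qualities))
    swqy = sum(wi * qi * yi for wi, qi, yi in zip(w, qualities, predictions))
    b = (sw * swqy - swq * swy) / (sw * swqq - swq * swq)
    a = (swy - b * swq) / sw
    return a + b                                   # prediction at q = 1
```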
Our research group is using chess as a vehicle for studying the fusion of adaptation, multiple representation, and search technologies for real-time decision making. Chess systems like Deep Blue have achieved Grandmaster-level play with a brute-force search of the game tree and human-supplied information, such as piece values and opening books. However, subtle aspects of chess, including positional features and advanced concepts, cannot be represented or processed efficiently with this standard method. Since 1989, Morph I-III have exhibited more autonomy and learning ability than traditional chess programs through `adaptive pattern-oriented chess'. Like its predecessors, Morph IV is a reinforcement learner, but it also uses a new technique we call pattern-level TD and Q-learning to mathematically map the state space and effectively learn to classify situations. Its three knowledge sources include two traditional ones, material and a piece-square table, and a new method called Distance. These are combined using a simple genetic algorithm and a decision tree. This paper shows the effectiveness of fusing knowledge to replace search in real-time situations, since an agent that combines all sources consistently beats an agent that employs any of the individual knowledge sources. Surprisingly, the pattern-level TD agent is slightly superior to the pattern-level Q-learning agent, despite the fact that the Q-learning agent updates more Q-values on each temporal step.
Tracking algorithms can determine fairly reliably where a target is heading. That is enough in civilian aviation, but it may not be in defence applications: the target's type and its hostility are at least as important. Normally, type or friend-or-foe identity cannot be determined from a target's kinematic information alone; other information is also needed. Every aircraft type has its own particularities; for example, knowing that a certain type has two engines directly affects the expected heat of its exhaust fumes. This kind of characteristic is generally referred to as attribute information. Because attribute information is type-dependent, it must be modelled by an expert with prior knowledge of the target's causal relations. One of the best frameworks for bringing expert knowledge into a tracking system is the Bayesian network, a model that describes the relationships between attributes. In this paper we concentrate on the identification problem: how does the assessment of a target's type change over time when observations are corrupted by noise? We review the theory of Bayesian networks and explain their place in a tracking system. Finally, we analyze the performance of Bayesian networks on the problem of identifying targets from a noisy data set.
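At the core of such noisy identification is a recursive Bayesian update of the posterior over target types. The sketch below is a deliberate single-node simplification of a full Bayesian network, with illustrative numbers:

```python
def update_posterior(prior, likelihoods):
    """One observation step of Bayesian type identification: the prior
    P(type) is multiplied elementwise by the likelihood
    P(observation | type) of the noisy attribute observation,
    then renormalized."""
    post = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

# Repeated noisy attribute observations sharpen the belief over time:
belief = [0.5, 0.5]                              # uniform prior over two types
for obs_lik in [[0.8, 0.4], [0.7, 0.5], [0.9, 0.3]]:
    belief = update_posterior(belief, obs_lik)
```

Even though no single observation is decisive, the posterior concentrates on type 0 after three updates, which is the time-evolution behaviour the paper analyzes.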
The Recognized Maritime Picture (RMP) is defined as a composite picture of activity over a maritime area of interest. In simple terms, building an RMP comes down to finding whether an object of interest, a ship in our case, is there or not, determining what it is, determining what it is doing, and determining whether some type of follow-on action is required. The Canadian Department of National Defence currently has access to, or may in the near future have access to, a number of civilian, military, and allied information or sensor systems for these purposes. These systems include automatic self-reporting positional systems, air patrol surveillance systems, high-frequency surface radars, electronic intelligence systems, radar space systems, and high-frequency direction-finding sensors. The ability to make full use of these systems is limited by the existing capability to fuse data from all sources in a timely, accurate, and complete manner. This paper presents an information fusion system under development that correlates and fuses these information and sensor data sources. This fusion system, named the Adaptive Fuzzy Logic Correlator, correlates the information in batch but fuses and constructs ship tracks sequentially. It applies standard Kalman filter techniques and fuzzy logic correlation techniques. We propose a set of recommendations that should improve the ship identification process. In particular, it is proposed to utilize as many non-redundant sources of information as possible that address specific vessel attributes. Another important recommendation states that the information fusion and data association techniques should be capable of dealing with incomplete and imprecise information. Some fuzzy logic techniques capable of tolerating imprecise and dissimilar data are proposed.
This paper describes the development and testing of a program that provides a quantitative metric for the comparison of night vision fusion algorithms. The user enters into the Metric Program the names of a thermal file, a vision file, and the corresponding fused image file. The program assigns a fusion rating to the algorithm based on the following four quantitative tests: information content (ic), vision retention (vr), thermal retention (tr), and a bar test to detect black segments. In ic, the information content of the fused image is compared with a weighted sum of the vision and thermal images. In vr, the number of faint lights that the fused image failed to incorporate is counted. In tr, the number of pixels from the thermal file included in the fused image is determined. With some fusion algorithms, if one of the sensors is blocked, a black segment appears in that area of the fused image, thus losing the information from the unblocked sensor. To test for this, the Metric Program creates a thermal file with three horizontal black bars. The program then allows the user to call the executable file of the algorithm under test, after which the user is asked to examine the fused image. If three pitch-black horizontal bars appear on the image, the algorithm fails the test. While the bar test is invariant to the vision/thermal image pair used, the other tests are not. For this reason, it is suggested that an algorithm be tested with 5 or 6 different image pairs and a mean fusion rating calculated. The program is used to evaluate several different algorithms. Day vision fusion algorithms are also tested.
This paper addresses the issue of objectively measuring the performance of pixel level image fusion systems. The proposed fusion performance metric models the accuracy with which visual information is transferred from the input images to the fused image. Experimental results clearly indicate that the metric is perceptually meaningful.
A challenge in the registration of battlefield images in the visible and far-infrared bands is feature inconsistency. We propose a contour-based approach to the registration and apply two free-form curve-matching algorithms: adaptive hill climbing and the iterative closest point algorithm. Neither algorithm requires explicit curve feature correspondence, and both are designed to be robust against outliers. We formulate the search as an adaptive hill-climbing optimization that minimizes the partial Hausdorff distance. In the iterative closest point algorithm, we choose the mean partial distance as the objective function, so that outliers can be easily handled using rank-order statistics.
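The rank-order-statistic idea behind the partial Hausdorff distance can be sketched as follows; this is a brute-force directed version, and the fraction parameter is an illustrative choice:

```python
import math

def partial_hausdorff(A, B, frac=0.75):
    """Directed partial Hausdorff distance from point set A to point set B:
    the frac-quantile (a rank-order statistic) of nearest-neighbour
    distances, which discards the worst (1 - frac) of points as outliers."""
    nn = sorted(min(math.dist(a, b) for b in B) for a in A)
    k = max(0, math.ceil(frac * len(nn)) - 1)   # index of the frac-quantile
    return nn[k]
```

With `frac = 1.0` this reduces to the ordinary directed Hausdorff distance; smaller fractions make the measure insensitive to a bounded number of outlier contour points.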
This paper discusses the construction of photorealistic 3D models from multisensor data. The data typically comprises multiple views of range and color images to be integrated into a unified 3D model. The integration process uses a mesh-based representation of the range data and the advantages of the mesh-based approach over a volumetric approach are mentioned. First, two meshes, corresponding to range images taken from two different viewpoints, are registered to the same world coordinate system and then integrated. This process is repeated until all views have been integrated. The integration is straightforward unless the two triangle meshes overlap. The overlapped measurements are detected and the less confident triangles are removed based on their distance from and orientation relative to the camera viewpoint. After removing the overlapping patches, the meshes are seamed together to build a single 3D model. The model is incrementally updated after each new viewpoint is integrated. The color images are used as texture in the finished scene model. The results show that the approach is efficient for the integration of large, multimodal data sets.
This paper describes the spatial resolution enhancement and dynamic range extending technologies for a Computerized Airborne Multicamera Imaging System (CAMIS). CAMIS is a commercially available multispectral imaging system for diverse manned and unmanned aerial vehicles to fly along flexible paths and altitudes for a wide variety of applications. The current version of CAMIS consists of four spectral bands of progressive scan CCD video cameras with 782 X 576 square pixels each, giving a total of 1.82 million effective pixels. These cameras are synchronized and aligned in parallel with sub-pixel-accurate spatial offsets over a common field of view. A software procedure interpolates the original four-band 782 X 576 captures into 1564 X 1152 ones using a bilinear algorithm, and then performs geometric correction and band-to-band pixel registration. The result is a more precisely registered, spatial resolution enhanced multispectral image, sized 1540(H) X 1140(V) X 4 (Bytes). The CAMIS CCD cameras also feature a controllable electronic shutter, which permits the system to acquire a desirable range of signals by a computed exposure, and then bracket it with two additional up/down-stepped exposures into computer memory. The integrated data set of the multiple stepped exposures can effectively extend the dynamic range of the measurement.
The Pacific Northwest National Laboratory is involved in the design and development of algorithms to improve feature identification and detection using multisensor imagery. This research is funded jointly by the National Imagery and Mapping Agency (NIMA) and the U.S. Department of Energy. A process has been designed that exploits the spatial discontinuities in a scene as revealed by the reflectance variation in a given frequency. We believe that by mapping the discontinuities in a scene, man-made objects can be better distinguished from natural objects. The process involves the generation of a texture map for each of the multisensor data sets; this facilitates the fusion of data from different sources with different physical characteristics. The advantage of this approach is that texture appears to reduce image data to a common base. This common base becomes important when using data of variable quality, resolution, and geometry. Texture analysis is applicable to a wide variety of feature identification and extraction applications. This paper focuses on demonstrating how the classification of texture maps derived from multisensor imagery can be used to automatically extract major roads, a requirement from NIMA under its comprehensive and integrated geospatial information generation strategy. Automatic/assisted road extraction is a particularly challenging task given the need for global coverage, accurate positioning, and sophisticated attribution.
We present a system for extraction of information bands from imagery that combines properties of subband/wavelet decomposition and factor analysis to achieve uniform presentations of ground truth from a variety of sensor inputs. Some unique information-regularizing features of the system are invariance to scaling, sorting, and skewing of the input data, as well as robustness to blurring or sharpening, nonlinear intensity remapping, and to the differences between literal and non-literal input imagery. These features enhance both visual interpretability of an RGB color image and machine exploitability of regularized information bands.
The hardware device is known as a Local Fusion System (LFS) and is part of a larger modular condition monitoring solution. The LFS unit is discussed in detail in this paper, describing how the design has evolved into a real hardware-based condition-monitoring device that will be taken to market by one of the project partners. The device is responsible for learning the normal operating state of a machine component and identifying when it changes, in a process called novelty detection. To learn the normal operating state of a machine, the device learns a representation of the sensors connected to the unit (which may be of varying types and number) using a novel neural-network-based fusion center, which is also discussed in detail. The paper also looks at pre- and post-processing issues in a limited hardware environment, along with some example development data from a real-world machine.
This paper describes work currently in progress whose aim is to design, develop, and evaluate a Multi-Agent Framework for Data Fusion (DFMAF). This is being done with the support of a battlefield surveillance demonstrator application named TA-10. In the following sections, we describe the benefits of using such a framework for data fusion problems. First, we briefly present the multi-agent research domain. Then we describe in further detail DFMAF, the multi-agent framework designed to help solve data fusion problems, and point out its appropriateness to such problems. Next, the implementation and use of DFMAF in the support application are detailed, as well as the assessment procedure followed. Finally, we conclude and outline future work.
The R&D group at Lockheed-Martin Canada has developed a target identifier function called ID Box. This computer program performs five main functions: first, it transforms the sensor attribute input into a few contact ID declarations; second, it evaluates the association score between the contact declarations and the ID propositions of a current target track; third, it performs attribute contact-to-track fusion using a modification of Dempster-Shafer evidential theory; fourth, using a platform library, it produces a translator that unifies the information within the track identity and the attribute input; and fifth, it manages the distribution of results to a system human-computer interface. Our exhaustive platform library enables the ID Box to fuse attribute data from almost all kinds of sensors or information sources that may be found on large warships or patrol aircraft. These attributes are the radar cross section and moving parts from surveillance radars, allegiance from interrogator systems, emitter composition from electronic support measure systems, spoken language from communication intercept systems, acoustic signature from sonar systems, propulsion type from IR detectors, dimensional data from imaging systems, and other classification attributes from various systems or operators, including dynamical parameters from positional trackers. This paper presents and describes the ID Box.
In this paper we offer an overview of design principles and propose a fusion process reference model that provides guidance for the design of data fusion systems. We incorporate a formal-method approach to fusion system design and show the role of the psychology of the human/computer interface in the system design process. Data fusion is a complex, multi-faceted field that has evolved from a number of different disciplines. This disparate nature has led to a largely bottom-up approach to data fusion system design, where the components are constructed first and the system-level issues addressed afterwards. The result is an ad hoc, prototype-driven philosophy which, we contend, is neither efficient nor effective. We believe that the design of data fusion systems needs to be given proper consideration, with a top-down approach that addresses system-level constraints first, thereby offering the possibility of re-usable, abstract structures. We offer an object-centered model of data fusion together with practical tools for studying and refining the model so that it can be useful in designing real data fusion systems.
In this paper, we present a software package designed to explore the data fusion domain in different application contexts. This tool, called CEPfuse (Conceptual Exploration Package for Data Fusion), provides good support for becoming familiar with the concepts and vocabulary of data fusion. Developed with Matlab 5.2, it is also a good tool for testing, comparing, and analyzing algorithms. Although the core of this package is evidential reasoning and identity information fusion, it has been conceived to cover all the interesting parts of a Multi-Sensor Data Fusion system. Because we concentrate our research on identity information fusion, the principal included algorithms are the Dempster-Shafer rules of combination, the Shafer-Logan algorithms for hierarchical structures, and several decision rules.
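For reference, Dempster's rule of combination, the core operation such a package implements, can be sketched for two basic mass assignments as follows; encoding focal elements as frozensets is an illustrative choice, not CEPfuse's representation:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments over subsets of a frame of discernment (subsets encoded
    as frozensets). Conflicting mass (empty intersections) is discarded
    and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict   # normalization constant; undefined if conflict == 1
    return {s: v / k for s, v in combined.items()}
```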
Methodology and stages of data processing in multichannel airborne radar imaging systems are considered. It is shown that data fusion in such systems requires special techniques, algorithms, and software for image processing and information retrieval. Some approaches and methods are proposed. The results are demonstrated for simulated and real images.
This study suggests a slight variation of Dempster-Shafer theory using observation qualification in multi-sensor contexts. The uncertainty is placed on the rules instead of on the sources; thus, each sensor's specialization is taken into account. In this approach, the masses are not attributed directly to the elements of the frame of discernment, but to the rules themselves, which become the sources of knowledge in the context of Dempster's combining rule. The study then proposes an approach for observation qualification in a multi-sensor context and suggests a new path for the delicate task of mass attribution.
The Dempster-Shafer (DS) evidential scheme is notoriously CPU-intensive and requires a truncation mechanism for real-time operation within a realistic Multi-Sensor Data Fusion (MSDF) system. A truncation scheme consisting of at least 4 parameters has previously been proposed and shown to work well in a limited set of naval and airborne scenarios. The present study considerably expands the realism of the generated airborne scenarios (by using a simulator with ground truth), expands the related platform and emitter databases, benchmarks the CPU loading, optimizes the values of the parameters by requiring faster convergence to a single correct platform identification, and computes relevant Measures of Performance. It also compares the truncated DS scheme's method of ordering the propositions for the MSDF operator to other schemes such as possibility theory, plausibility decision rules, and the Expected Utility Interval approach. Most parameters are found to vary with the database size and the independence of sensor reports. In particular, the need to keep more propositions than previously reported is quantified, and schemes to dynamically adjust this number are proposed. The relevant thresholds also have to be simultaneously decreased as the database size increases. Furthermore, the minimum amount of ignorance has to be kept at an appropriate level to recover from countermeasures included in some scenarios, or from badly trained ship classifiers.
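A generic sketch of the kind of truncation such a scheme parameterizes, keeping only the highest-mass propositions and enforcing a minimum level of ignorance, is shown below. The parameter names and the specific folding/renormalization choices are illustrative, not those of the cited study; masses are assumed to sum to 1:

```python
def truncate(masses, frame, max_props=10, min_ignorance=0.02):
    """Keep only the max_props highest-mass propositions, fold the
    discarded mass into ignorance (the full frame), and enforce a
    minimum ignorance level by rescaling the other propositions."""
    items = sorted(masses.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(items[:max_props])
    # Fold the mass of every discarded proposition into the full frame.
    kept[frame] = kept.get(frame, 0.0) + sum(v for _, v in items[max_props:])
    if kept[frame] < min_ignorance:
        # Rescale non-frame masses so ignorance reaches the floor.
        scale = (1.0 - min_ignorance) / (1.0 - kept[frame])
        kept = {s: v * scale for s, v in kept.items() if s != frame}
        kept[frame] = min_ignorance
    return kept
```

Bounding the number of retained propositions is what keeps the pairwise products of Dempster's rule, and hence the CPU load, from growing combinatorially.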
A tracking system with a Dempster-Shafer attribute association algorithm is studied. The aim of the paper is to study how different parameters affect the association accuracy. The results show that the proposed Dempster-Shafer attribute association algorithm is robust to parameter variations and thus to modeling errors. The simulations are performed on synthesized data.
Bayesian and Dempster-Shafer Theory based methods are among the alternative algorithmic approaches to multisensor data fusion. The two approaches differ significantly, and the extent of their applicability to data fusion is still being debated. This paper presents a Monte Carlo simulation approach for a comparative analysis of Dempster-Shafer Theory based and Bayesian multisensor data fusion in the classification task domain, including the implementation of both formalisms and the results of the Monte Carlo experiments of this analysis.
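As a point of reference for such a comparison, the Bayesian formalism for conditionally independent sensors reduces to a normalized product of likelihoods. The sketch below is illustrative (class labels and numbers are invented); Dempster's rule coincides with this update when all mass functions are Bayesian, i.e., focused on singletons.

```python
def bayes_fuse(prior, likelihoods):
    """Fuse conditionally independent sensor likelihoods via Bayes' rule.
    prior and each element of likelihoods map class label -> probability."""
    post = dict(prior)
    for lik in likelihoods:
        post = {c: post[c] * lik[c] for c in post}
        z = sum(post.values())                 # normalizing constant
        post = {c: p / z for c, p in post.items()}
    return post

# Two sensors reporting likelihoods for hypothetical classes:
p = bayes_fuse({'tank': 0.5, 'truck': 0.5},
               [{'tank': 0.8, 'truck': 0.2}, {'tank': 0.7, 'truck': 0.3}])
```

Here the fused posterior concentrates on 'tank' (about 0.90), since both sensors favor it; the divergence between the formalisms appears only when mass is assigned to non-singleton propositions.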
The main topic of this study concerns edge detection using information fusion approaches. Edge detection methods are based on first- and second-order local operations followed by thresholding and edge tracking techniques. In this study, an intermediate fuzzy-evidential conceptual level is introduced between the gray level and the symbolic edge detection information level. From the image, evidence concerning edges and regions is extracted using fuzzy membership functions as well as contextual information. The proposed approach can be decomposed into two steps: (1) application of an evidential reasoning approach in order to compute a basic mass function, and (2) an edge detection process based on the use of an iterative algorithm exploiting the contextual information and a belief mass function. Mass function computation is based on the use of edge and region fuzzy membership functions of each pixel in the analyzed scene. The main interest of this step is to consider membership functions as observed evidence instead of image gray level values. The key idea of the second step is to use all the information about regions, edges and contextual data in the edge extraction process. The obtained results are encouraging, and the proposed methodology is shown to be robust to different noisy environments.
Information fusion includes the integration of feature data, expert knowledge, and algorithms. For example, in automatic target recognition, features of size, color, and motion can be fused to assess the combination of multi-modal information. A neuro-fuzzy fusion of features captures the multilevel language content of sensory information by fusing neural network data analysis with rule-based decision making. Additionally, the neuro-fuzzy architecture can effectively fuse coarse and fine abstracted feature data at the content level for decision making. In this paper, we investigate a multilevel neuro-fuzzy feature-based architecture for synthetic aperture radar target recognition.
While exact methods (e.g., jump-diffusion algorithms) for performing maximum a posteriori (MAP) target detection and recognition can be very complex and computationally expensive, it is often not clear how to develop effective and less complex suboptimal methods. Also, MAP algorithms typically generate hard decisions, but for fusion applications it would often be more desirable to have probabilities or confidence levels for a range of alternatives. In this paper, we consider the application of a framework called probability propagation in Bayesian networks. This framework organizes computations for iterated approximations to posterior probabilities, and has been used recently by communications researchers to derive very effective iterative decoding algorithms. In this paper, we develop a Bayesian network model for the problem of target detection and recognition, and use it in conjunction with Markov models for target regions to derive a probability propagation algorithm for estimating target shape and label probabilities.
This work addresses the often neglected but important problem of Level 3 fusion, or threat refinement. This paper describes algorithms for threat prediction and test results from a prototype threat prediction fusion engine. The threat prediction fusion engine selectively models important aspects of the battlespace state using probability-based methods and information obtained from lower level fusion engines. Our approach uses hidden Markov models of a hierarchical threat state to find the most likely Course of Action (CoA) for the opposing forces. Decision trees use features derived from the CoA probabilities and other information to estimate the level of threat presented by the opposing forces. This approach provides the user with several measures associated with the level of threat, including: the probability that the enemy is following a particular CoA, the potential threat presented by the opposing forces, and the likely time of the threat. The hierarchical approach used for modeling helps us efficiently represent the battlespace with a structure that permits scaling the models to larger scenarios without adding prohibitive computational costs or sacrificing model fidelity.
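CoA probabilities of the kind described can be computed with the standard HMM forward recursion. The sketch below uses an invented two-CoA model and binary observed features as placeholders, not the paper's models:

```python
import numpy as np

def coa_posterior(pi, A, B, obs):
    """HMM forward recursion with per-step normalization: returns the
    filtered posterior over hidden CoA states given the observations so far.
    pi: (S,) initial probs; A: (S,S) transitions; B: (S,O) emission probs."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

# Invented two-CoA model ('attack' = state 0, 'feint' = state 1):
pi = np.array([0.5, 0.5])
A  = np.array([[0.9, 0.1], [0.1, 0.9]])     # CoAs are persistent
B  = np.array([[0.8, 0.2], [0.2, 0.8]])     # feature 0 suggests 'attack'
post = coa_posterior(pi, A, B, [0, 0, 0])   # repeated 'attack'-like features
```

The resulting posterior is exactly the "probability that the enemy is following a particular CoA" measure; downstream decision trees would consume these values as features.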
Specular reflections from the environment introduce uncertainty into ultrasonic sensor range data. In this paper, we examine the application of an evidential method for data integration, using a specially designed sensor model to overcome the problem. Dempster's rule of combination is used to fuse the sensor data to obtain a map defined on a 2D evidence grid. The sensor model tries to reduce the uncertainty caused by specular reflections with a filtering factor. Experimental results have shown the usefulness of this method.
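On the two-hypothesis frame {occupied, empty}, Dempster's rule admits a simple closed form per grid cell. The sketch below assumes that frame and models the filtering factor generically as discounting toward ignorance; the paper's actual sensor model may differ.

```python
def fuse_cell(c1, c2):
    """Dempster's rule on one evidence-grid cell; each argument is
    (m_occupied, m_empty, m_unknown) with the three masses summing to 1."""
    o1, e1, u1 = c1
    o2, e2, u2 = c2
    k = o1 * e2 + e1 * o2                        # conflicting mass
    o = (o1 * o2 + o1 * u2 + u1 * o2) / (1 - k)
    e = (e1 * e2 + e1 * u2 + u1 * e2) / (1 - k)
    return o, e, 1.0 - o - e

def discount(cell, alpha):
    """Filtering factor: scale belief by alpha in [0, 1], moving the
    remainder to ignorance to reflect possible specular reflection."""
    o, e, u = cell
    return alpha * o, alpha * e, 1.0 - alpha * (o + e)
```

Discounting a suspect reading before combination prevents a single specular return from driving a cell to a confident but wrong occupancy value.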
A process performance monitoring application for a high-volume consumer electronics manufacturing process has been developed using multivariate information fusion techniques. The purpose of the monitoring system is to identify test stations deviating in measurement values from other test stations, and to provide early warning and identification of important process-related disturbances, malfunctions or faults by extracting information from sensor measurements and by using knowledge about the process.
A new paradigm for machinery maintenance is emerging as preventive maintenance strategies are being replaced by condition-based maintenance. In condition-based maintenance, machinery is repaired or serviced only when an intelligent monitoring system indicates that the system cannot fulfill mission requirements. The implementation of such systems requires a combination of sensor data fusion, feature extraction, classification, and prediction algorithms. In addition, new system architectures are being developed to facilitate the reduction of wide bandwidth sensor data to concise predictions of ability of the system to complete its current mission or future missions. This paper describes the system architecture, data fusion, and classification algorithms employed in a distributed, wireless bearing and gear health monitoring system. The role and integration of prognostic algorithms--required to predict future system health--are also discussed. Examples are provided which illustrate the application of the system architecture and algorithms to data collected on a machinery diagnostics test bed at the Applied Research Laboratory at The Pennsylvania State University.
System design methodology is becoming a strategic activity in industrial competition. Obtaining a substantial reduction of time to market for complex and reliable products is one of the priorities for manufacturers. Top-down design, automated generation of architecture, co-design, virtual prototyping, etc. have already been identified as research topics to be prioritized. To be efficient, each theoretical contribution must be inserted into a global procedure of project management where complementary elements such as marketing, technico-economic surveys, road-mapping, and internal know-how must be considered. In this context, this paper presents a design methodology extending from the requirement statement to the technical realization of the product, applied to the design of a Time Stress Measurement Device for the observation of aeronautical mechanical systems.
Currently available commercial endoscopic graspers do not have any built-in sensors, so the surgeon has no tactile feedback with which to manipulate tissues safely. This paper reports on the design, fabrication and testing of a semiconductor micro-strain gauge endoscopic tactile sensor. The sensor consists of two semiconductor micro-strain gauges positioned at the back face of an endoscopic grasper; it can measure the magnitude and the position of the applied force with only two sensing elements. The amplification system for the strain gauges is also designed and fabricated. It is shown that when a force is applied to the endoscopic grasper, the magnitude of the applied force can be displayed on an LED device. The position of the applied force is obtained by combining the outputs from the two insulated strain gauges. We have shown that the grasper operates in a wet environment. It exhibits high force sensitivity, large dynamic range, and good linearity. The sensor is integrated with a commercial endoscopic tool. The advantages and disadvantages of the system are also discussed.
This study deals with different strategies of resource allocation in relation to tracking purposes. The integration of sensors for target tracking and resource management has been intensively investigated, and several effective techniques have been developed. In a military context, resource allocation has two main purposes: precision of target tracking and sensor discretion, that is, a limited number of pulses. Our different techniques provide solutions in our search for an optimal strategy according to the requirements of allocation. Optimizing target pulse allocation amounts to minimizing a given criterion, taking into account target vagueness. Our work consists of adapting this criterion to the tracking requirements by favoring a running mode (Stand-By or Tracking).
The present paper explores the dynamic level of sensory information fusion appropriate for hardware implementations. We associate with multitracking sensors their abstractions, namely discrete-time multihead state circuits. We presume that the sensors are independent of each other and that there are no direct interfaces between them. The fusion is achieved by sensor-to-sensor track association, which is controlled by the global state transition system. We investigate synchronous and asynchronous fusion models over common and distributed resource spaces, and we compare the recognition capacities of these and some other models, such as Turing machines and stack automata. The fused circuits are then applied to analyze arithmetical predicates, social games and the unsolved `Syracuse Conjecture'.
Recent research has demonstrated the benefits of a multiple hypothesis, multiple model sonar line tracking solution, achieved at significant computational cost. We have developed an adaptive architecture that trades computational resources for algorithm complexity based on environmental conditions. A Fuzzy Logic Rule-Based approach is applied to adaptively assign algorithmic resources to meet system requirements. The resources allocated by the Fuzzy Logic algorithm include (1) the number of hypotheses permitted (yielding multi-hypothesis and single-hypothesis modes), (2) the number of signal models to use (yielding an interacting multiple model capability), (3) a new track likelihood for hypothesis generation, (4) track attribute evaluator activation (for signal to noise ratio, frequency bandwidth, and others), and (5) adaptive cluster threshold control. Algorithm allocation is driven by a comparison of current throughput rates to a desired real time rate. The Fuzzy Logic Controlled (FLC) line tracker, a single hypothesis line tracker, and a multiple hypothesis line tracker are compared on real sonar data. System resource usage results demonstrate the utility of the FLC line tracker.
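A fuzzy throughput-to-resources rule of the kind described might be sketched as follows; the membership functions, the clamped range, and the hypothesis budgets are illustrative assumptions, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def max_hypotheses(throughput_ratio):
    """Map (current rate / desired real-time rate) to a hypothesis budget
    using three fuzzy rules, defuzzified by a weighted average."""
    r = min(max(throughput_ratio, 0.5), 1.5)   # clamp to the supported range
    slow = tri(r, 0.0, 0.5, 1.0)               # running behind real time
    ok   = tri(r, 0.5, 1.0, 1.5)               # keeping up
    fast = tri(r, 1.0, 1.5, 2.0)               # headroom to spare
    # rule consequents: slow -> 1 hypothesis, ok -> 4, fast -> 8
    return round((slow * 1 + ok * 4 + fast * 8) / (slow + ok + fast))
```

For example, a system running at half the desired rate (`max_hypotheses(0.5)`) collapses to a single-hypothesis mode, while one with ample headroom (`max_hypotheses(1.5)`) is granted the full multi-hypothesis budget; intermediate ratios blend the two rules smoothly.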
The report will highlight the final results of an Advanced Technology Demonstration effort for an enhanced all-source fusion system recently developed at the Fusion Technology Branch, Air Force Research Laboratory (IFEA). It will describe an innovative approach combining traditional fusion algorithms and heuristic reasoning techniques to significantly improve the detection, identification, location and tracking of mobile red, blue and gray components of the electronic environment.
The integration of information stemming from different sensors, cues, or modalities is among the most fundamental problems of perception in biological and artificial systems. Due to frequent changes in complex environments, the integration has to be adaptive. However, there is usually no teacher available to guide the adaptation. An agent has to figure out on its own which cues are reliable for a given task in the current context--self-organization is required. We have recently proposed a new integration scheme for such situations.
Interim results are presented for a novel approach to finding and recognizing formations in moving-target indicator (MTI) radar data by applying Bayesian methods from image processing. The salient features of an MTI datum are a point location and a range rate along the line of sight to the radar. Each of these values is measured with a random location and range-rate error that has known statistical properties. Formations may have inhomogeneous vehicular density as directed by doctrine. Measured formations have variability in realizations resulting from the maneuvering of individual vehicles within the formation, partial obscuration, and other factors.
The availability of multi-sensed data, especially in remote sensing, leads to new possibilities in the area of target recognition. In fact, the information contained in an individual sensor represents only one facet of reality; the use of several sensors aims at covering different facets of real-world objects. In this study, the targets to recognize are planimetric features (i.e. roads, energy transmission lines, railroads and rivers). The sensors used are visible-band satellite sensors (SPOT Panchromatic and Landsat TM) as well as radar satellites (Radarsat fine mode and ERS-1). Sensor resolutions range from 8 to 30 meters/pixel. In this study, the modeling is not limited, as is generally the case, to the reality of the features of interest, but is extended to each sensor that will be used. Moreover, the decision space (here a 3D symbolic map) has to be modeled in the same way as the reality and the sensors, to lead to a coherent and uniform system. Each model is developed using an object-oriented approach. Each reality object is defined through its radiometric, geometric and topologic features. The sensor model objects are defined in accordance with image acquisition and definition, including the stereo image cases (for SPOT and Radarsat). Finally, the decision space objects define the resulting 3D symbolic map where, for instance, a pixel's attributes contain classification information as well as position, accuracy, reality object membership values, etc.
This paper presents a closed form solution to the multiple platform simultaneous localization and map building (SLAM) problem. Closed form solutions are presented in both state space and information based forms. A key conclusion of this paper is that the information-state based form offers many advantages over the state space formulation in allowing the SLAM algorithm to be decentralized across multiple platforms.
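The advantage of the information form for decentralization comes from additivity: each observation contributes H^T R^-1 H to the information matrix and H^T R^-1 z to the information vector, so contributions from different platforms can be summed in any order. A minimal single-state sketch (all numbers illustrative):

```python
import numpy as np

def info_update(Y, y, H, R, z):
    """Information-filter observation update. Each platform's observation
    adds H^T R^-1 H to the information matrix Y and H^T R^-1 z to the
    information vector y; the sums commute, enabling decentralized fusion."""
    Ri = np.linalg.inv(R)
    return Y + H.T @ Ri @ H, y + H.T @ Ri @ z

# Two platforms observing the same scalar landmark position:
Y = np.array([[0.1]])                 # weak prior information
y = np.array([0.0])
H = np.array([[1.0]])                 # direct observation model
Y, y = info_update(Y, y, H, np.array([[0.5]]), np.array([2.0]))   # platform 1
Y, y = info_update(Y, y, H, np.array([[0.5]]), np.array([2.2]))   # platform 2
x_hat = np.linalg.solve(Y, y)         # fused estimate, about 2.05
```

Because the update is a plain sum, each platform can accumulate its own terms locally and exchange only the totals, which is the property the state-space (covariance) form lacks.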
This paper introduces techniques to deal with temporal aspects of fusion systems with redundant information. One of the challenges of a fusion system is that individual pieces of information are not necessarily announced at the same time. While some decisions (or features or data) are produced at a high sampling frequency, other decisions are generated at a much lower rate, perhaps only once during the operation of the system or only during certain operating conditions. This means that some information will be outdated when the actual information fusion task is performed. An event may have occurred in the meantime, leading to a decision discord. We tackle this challenge by introducing the concept of `information or decision forgetting': in case of an information discord, more recent information is evaluated with higher confidence than older information. Another difficulty is distinguishing between outliers and actual system changes. If tools perform their task at a high sampling frequency, we can employ `decision smoothing', that is, we factor out the occasional outlier and generally reduce the noise of the system. To that end, we introduce an adaptive smoothing algorithm that evaluates the system state and changes the smoothing parameter if it encounters suspicious situations, i.e., situations that might indicate a changed system state. We demonstrate the introduced concepts in the diagnostic realm, where we aggregate the output of several different diagnostic tools.
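The `decision forgetting' concept can be sketched as an age-discounted vote; the exponential discount and the time constant `tau` below are one plausible choice, not necessarily the authors':

```python
import math

def fuse_with_forgetting(decisions, now, tau=10.0):
    """Age-weighted decision fusion: each decision is a tuple
    (label, confidence, timestamp). Older decisions are exponentially
    discounted, so in a discord the more recent evidence dominates."""
    scores = {}
    for label, conf, t in decisions:
        weight = conf * math.exp(-(now - t) / tau)
        scores[label] = scores.get(label, 0.0) + weight
    return max(scores, key=scores.get)
```

With this weighting, a confident but stale 'fault' report loses to a fresher 'ok' report once its age exceeds a few time constants, which is the desired behavior when an event has occurred between the two announcements.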
A decision support tool has been developed that advises on approach and algorithm selection for automated data analysis systems. These approaches and algorithms include the standard data and information fusion methods. The tool comprises a database of fuzzy rules in disjunctive normal form. These rules were obtained by eliciting heuristic knowledge from established practitioners of data fusion. The input to the system consists of a variety of problem characteristics, some of which are fuzzy quantities and others are crisp values. Where fuzzy granulation was required this again was elicited from experts. The final fuzzy rule based system has been implemented as a Windows executable called Equity, which is freely available to download from the World Wide Web.
Today, sensor technology is widely used in both military and civilian domains, and a number of approaches to target detection have been presented. In this paper, we discuss time- and spectrum-domain feature extraction of targets and pattern recognition by unattended ground sensors using a Bayesian approach.
This paper concerns the possibilities of sea bottom imaging and of determining the altitude of each imaged point. New side scan sonars, which image the sea bottom at high definition and evaluate the relief at the same definition, derive their performance from an interferometric multisensor system. The drawbacks concern the precision of the numerical altitude model. One way to improve measurement precision is to merge all the information issued from the multisensor system; this increases the Signal to Noise Ratio (SNR) and the robustness of the method. The aim of this paper is to clearly demonstrate the ability to derive benefit from all information issued from the three-array side scan sonar by merging: (1) the three phase signals obtained at the output of the sensors, (2) the same set of data after the application of different processing methods, and (3) the a priori relief contextual information. The key idea of the proposed fusion technique is to exploit the strengths and weaknesses of each data element in the fusion process, so that the global SNR is improved as well as the robustness to hostile noisy environments.