Vehicle tracking can be done automatically based on data from a distributed sensor network. The determination of vehicle behavior must currently be done by humans. Behaviors of interest include searching, attacking and retreating. The purpose of this paper is to show an approach for the automatic interpretation of vehicle behaviors based on data from distributed sensor networks. The continuous dynamics of the sensor network are converted into symbolic dynamics by dividing its phase space into hypercubes and associating a symbol with each region. When the phase-space trajectory enters a region, its corresponding symbol is emitted into a symbol stream. Substrings from the stream are interpreted as a formal language defining the behavior of the vehicle. The formal language from the sensor network is compared to the languages associated with known behaviors of interest. Techniques for performing quantitative comparisons between formal languages are presented. The abstraction process is shown to be powerful enough to distinguish two simple behaviors of a robot based on data from a pressure sensitive floor.
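As a concrete illustration of the symbolization step described above, the minimal Python sketch below partitions an assumed phase-space box into a uniform grid of hypercubes and emits a symbol each time a trajectory enters a new cell. The grid bounds, resolution, and function names are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal sketch of the symbolization step, assuming a uniform hypercube grid
# over known phase-space bounds. Grid resolution and names are illustrative.

def symbolize(trajectory, lower, upper, bins_per_dim):
    """Emit one symbol each time the trajectory enters a new hypercube."""
    trajectory = np.asarray(trajectory, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    # Per-dimension cell index of every sample, clipped to the grid.
    cells = np.floor((trajectory - lower) / (upper - lower) * bins_per_dim)
    cells = np.clip(cells, 0, bins_per_dim - 1).astype(int)
    # Collapse the per-dimension indices into a single symbol id per sample.
    symbols = np.ravel_multi_index(cells.T, (bins_per_dim,) * trajectory.shape[1])
    # Keep a symbol only when it differs from the previous one (region entry).
    stream = [int(symbols[0])]
    for s in symbols[1:]:
        if s != stream[-1]:
            stream.append(int(s))
    return stream

# Example: a 2-D circular trajectory quantized on a 4x4 grid over [0, 1]^2.
t = np.linspace(0.0, 1.0, 200)
traj = np.column_stack([0.5 + 0.4 * np.cos(2 * np.pi * t),
                        0.5 + 0.4 * np.sin(2 * np.pi * t)])
print(symbolize(traj, lower=[0, 0], upper=[1, 1], bins_per_dim=4))
```

The resulting symbol stream is the raw material from which substrings are drawn and compared against the formal languages of known behaviors.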
This paper presents distributed adaptation techniques for use in wireless sensor networks. As an example application we consider data routing by a sensor network in an urban terrain. The adaptation methods are based on ideas from physics, biology, and chemistry. All of the approaches rely on emergent behavior in that they: (i) perform global adaptation using only locally available information, (ii) have strong stochastic components, and (iii) use both positive and negative feedback to steer themselves. We analyze the approaches’ ability to adapt, their robustness to internal errors, and their power consumption. Comparisons to standard wireless communications techniques are given.
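Ant-inspired pheromone routing is one well-known member of this class of techniques and exhibits all three properties; the Python sketch below illustrates local, stochastic, feedback-driven adaptation in that spirit. It is an assumed example, not a reconstruction of the paper's methods.

```python
import random

# Ant-style pheromone routing: next hops are chosen stochastically from local
# pheromone levels; successful paths are reinforced (positive feedback) and
# all pheromone evaporates over time (negative feedback).

def next_hop(pheromone, neighbors, rng):
    """Stochastically pick a neighbor in proportion to local pheromone levels."""
    weights = [pheromone[n] for n in neighbors]
    return rng.choices(neighbors, weights=weights, k=1)[0]

def update(pheromone, path_succeeded, path, deposit=1.0, evaporation=0.05):
    """Reinforce links on successful paths; evaporate pheromone everywhere."""
    for n in pheromone:
        pheromone[n] *= (1.0 - evaporation)      # negative feedback
    if path_succeeded:
        for n in path:
            pheromone[n] += deposit              # positive feedback

rng = random.Random(0)
pheromone = {"a": 1.0, "b": 1.0, "c": 1.0}
hop = next_hop(pheromone, ["a", "b", "c"], rng)
update(pheromone, path_succeeded=True, path=[hop])
print(hop, pheromone)
```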
Most contemporary real-time distributed systems minimize computation energy consumption to reduce total system energy. However, wireless sensor networks expend a significant portion of their total energy on communication. Wireless transmission energy depends directly on the desired transmission distance, so the energy required for communication between neighboring nodes is less than that for distant ones. Mobile nodes can therefore reduce transmission energy costs by approaching one another before communicating. The penalty for reducing energy through locomotion is an increase in time consumed, so care must be taken to meet system deadlines. We combine locomotion as a communication energy reduction strategy with well-known computation energy reduction schemes and demonstrate the resultant energy savings for representative systems.
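The trade-off can be sketched with a standard first-order radio model in which per-bit transmission energy grows with distance. The constants, the locomotion cost, and the simple search over approach distances in the Python sketch below are illustrative assumptions, not the paper's models.

```python
# Illustrative sketch of the move-then-transmit trade-off, using an assumed
# first-order radio model (energy per bit = e_elec + e_amp * d**alpha).

def tx_energy(bits, d, e_elec=50e-9, e_amp=100e-12, alpha=2.0):
    """Radio energy (J) to send `bits` over distance d (m)."""
    return bits * (e_elec + e_amp * d ** alpha)

def best_plan(bits, d, deadline, speed, e_move, tx_rate):
    """Choose the approach distance minimizing total energy while the
    move-then-transmit time still meets the deadline."""
    best_energy, best_move = tx_energy(bits, d), 0.0   # baseline: transmit in place
    for step in range(1, 101):                         # try approaching in 1% steps
        move = d * step / 100.0
        if move / speed + bits / tx_rate > deadline:   # locomotion too slow: stop
            break
        energy = e_move * move + tx_energy(bits, d - move)
        if energy < best_energy:
            best_energy, best_move = energy, move
    return best_energy, best_move

energy, move = best_plan(bits=1e6, d=200.0, deadline=30.0,
                         speed=1.0, e_move=0.01, tx_rate=250e3)
print(f"approach {move:.0f} m, total energy {energy:.2f} J")
```

With these (assumed) numbers the deadline, not the energy model, limits how far the node can afford to approach before transmitting.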
This paper describes an efficient system for the registration of range maps in the compressed wavelet domain. A feature-based approach is taken to reduce the computational burden of the registration search process. Efficient and accurate registration of range or depth maps is an important problem in military, medical, and manufacturing applications. The techniques described can be applied not only to 3D data but also to 2D images. We illustrate registration results on several real scenes.
The visualization of a scene under murky atmospheric conditions is improved by fusing multiple images. A key feature of this system is the use of the wavelet domain in the fusion process. Many possible fusion formulas exist in this domain, and to find the "best" formula we formulate an optimization problem. We assume a set of training data consisting of a sequence of images degraded by atmospheric effects and the corresponding image with no atmospheric effects present (the ground truth). Next, we perform a search over the parameter space of our "generic fusion formula", attempting to minimize the error between the ground-truth image and the image created by fusing the noisy data. Using the resulting "best" fusion formula, we have created a system for pixel-level fusion. Experimental results are shown and discussed. Possible applications of this system include processing of outdoor security system data, filtering of outdoor vehicle image data, and use in heads-up displays.
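A minimal sketch of this kind of search is shown below, assuming PyWavelets for the transform and a one-parameter weighted blend of wavelet coefficients standing in for the "generic fusion formula"; the weighting scheme, error metric, and toy training data are illustrative assumptions.

```python
import numpy as np
import pywt  # assumes the PyWavelets package is available

# Sketch: tune a one-parameter wavelet-domain fusion formula against training
# pairs of degraded images and a ground-truth image.

def fuse(img_a, img_b, w, wavelet="db2"):
    """Fuse two images by blending their wavelet coefficients with weight w."""
    ca, (ha, va, da) = pywt.dwt2(img_a, wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b, wavelet)
    blend = lambda x, y: w * x + (1.0 - w) * y
    coeffs = (blend(ca, cb), (blend(ha, hb), blend(va, vb), blend(da, db)))
    return pywt.idwt2(coeffs, wavelet)

def best_weight(noisy_pairs, truth):
    """Grid-search the fusion weight minimizing error to the ground truth."""
    weights = np.linspace(0.0, 1.0, 21)
    errors = [np.mean([(fuse(a, b, w)[:truth.shape[0], :truth.shape[1]] - truth) ** 2
                       for a, b in noisy_pairs])
              for w in weights]
    return weights[int(np.argmin(errors))]

# Toy training data: one clean image and pairs of differently degraded copies.
truth = np.random.rand(64, 64)
pairs = [(truth + 0.3 * np.random.randn(64, 64),
          truth + 0.1 * np.random.randn(64, 64)) for _ in range(5)]
print("best fusion weight:", best_weight(pairs, truth))
```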
Many sensor fusion systems combine redundant inputs to increase information reliability. However, few studies show how to choose the redundant sensors for these systems. We find sensor configurations that minimize system cost while ensuring system dependability, where dependability is the generic term for system reliability and availability. Given many types of sensors, all fulfilling system operational requirements but with different dependability and per-item cost, heuristic search methods are used to find minimum-cost configurations. Our main contributions are deriving the optimization problem, showing that the search can be limited to a multidimensional surface, deriving a fitness function, and providing an efficient algorithm for computing dependability bounds. Two heuristics, genetic algorithms and simulated annealing, are proposed as search methods. Experimental results show cost savings of up to 20% compared to systems with only one component type.
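The simulated-annealing half of such a search might look like the Python sketch below, where the sensor types, costs, reliabilities, and the simple parallel-redundancy dependability model are assumed for illustration and are not the paper's models or bounds.

```python
import math
import random

# Simulated-annealing sketch for choosing per-type sensor counts that minimize
# cost subject to a dependability target (assumed parallel-redundancy model).

TYPES = [(10.0, 0.90), (25.0, 0.97), (60.0, 0.995)]  # (unit cost, reliability)
TARGET = 0.9995                                       # required dependability

def dependability(counts):
    """System works if at least one of the redundant sensors works."""
    fail = 1.0
    for (_, r), n in zip(TYPES, counts):
        fail *= (1.0 - r) ** n
    return 1.0 - fail

def cost(counts):
    total = sum(c * n for (c, _), n in zip(TYPES, counts))
    if dependability(counts) < TARGET:
        total += 1e6          # penalize infeasible configurations
    return total

def anneal(steps=20000, t0=50.0, seed=0):
    rng = random.Random(seed)
    counts = [3, 1, 1]
    best = list(counts)
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-3
        cand = list(counts)
        i = rng.randrange(len(cand))
        cand[i] = max(0, cand[i] + rng.choice([-1, 1]))
        delta = cost(cand) - cost(counts)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            counts = cand
        if cost(counts) < cost(best):
            best = list(counts)
    return best

best = anneal()
print(best, cost(best), round(dependability(best), 6))
```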
This paper explores an image processing application of optimization techniques that entails interpreting noisy sensor data. The application is a generalization of image correlation: we attempt to find the optimal congruence which matches two overlapping gray-scale images corrupted with noise. Both tabu search and genetic algorithms are used to find the parameters that match the two images. A genetic algorithm approach using an elitist reproduction scheme is found to provide significantly superior results. The presentation includes a graphical depiction of the paths taken by tabu search and the genetic algorithm when trying to find the best possible match between two corrupted images.
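A minimal sketch of the elitist genetic-algorithm search is given below, restricted for brevity to integer translations (a special case of the congruences discussed above); the population size, mutation rate, and mean-squared-difference fitness are illustrative choices, not the paper's settings.

```python
import numpy as np

# Elitist GA sketch: search for the integer translation (dx, dy) that best
# aligns two noisy, overlapping images.

rng = np.random.default_rng(0)

def ssd(img_a, img_b, dx, dy):
    """Mean squared difference over the overlap when img_b is img_a shifted by (dx, dy)."""
    h, w = img_a.shape
    a = img_a[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
    b = img_b[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    return float(np.mean((a - b) ** 2))

def ga_match(img_a, img_b, max_shift=20, pop=40, gens=60, elite=4):
    popn = rng.integers(-max_shift, max_shift + 1, size=(pop, 2))
    for _ in range(gens):
        fitness = np.array([ssd(img_a, img_b, dx, dy) for dx, dy in popn])
        popn = popn[np.argsort(fitness)]                    # best individuals first
        children = [popn[i].copy() for i in range(elite)]   # elitist reproduction
        while len(children) < pop:
            p, q = popn[rng.integers(0, pop // 2, size=2)]  # parents from fitter half
            child = np.array([p[0], q[1]])                  # one-point crossover
            if rng.random() < 0.3:                          # mutation
                child[rng.integers(0, 2)] += rng.integers(-2, 3)
            children.append(np.clip(child, -max_shift, max_shift))
        popn = np.array(children)
    return int(popn[0][0]), int(popn[0][1])

# Example: image B is image A shifted by (dx, dy) = (5, -3) plus noise.
img_a = rng.random((80, 80))
img_b = np.roll(img_a, shift=(-3, 5), axis=(0, 1)) + 0.05 * rng.standard_normal((80, 80))
print("estimated shift:", ga_match(img_a, img_b))
```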
Multisensor fusion is a method of current importance for improving sensor reliability. Because individual sensors are prone to transient errors, mechanical failures, and noise, as well as being of limited accuracy, it is advisable to fuse readings from many heterogeneous sensors. This allows several different sensor technologies to be used together to measure the value of a physical variable, making the overall system less sensitive to the failure of any one technology. Unfortunately, it is a non-trivial task to glean the best interpretation from a large number of partially contradictory sensor readings. A number of methods exist for finding the best approximate match for this type of redundant, but possibly faulty, data. This paper presents a new algorithm that finds the best possible interpretation of partially contradictory sensor readings, some of which may be incorrect, for data of more than two dimensions. Currently available algorithms return interpretations that are larger than optimal in order to avoid excessive computational complexity. The algorithm presented here is based on data structures from computational geometry and provides the smallest possible region satisfying the constraints of the problem with reasonable computational complexity.
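For intuition, the classical one-dimensional version of this fusion problem, finding the regions consistent with at least N - f of N interval readings, can be sketched with a simple endpoint sweep. The Python sketch below is background illustration only and is not the multidimensional computational-geometry algorithm presented in the paper.

```python
# 1-D illustration: return the region(s) covered by at least N - f of the
# N interval readings, tolerating up to f faulty sensors.

def fuse_intervals(intervals, max_faulty):
    need = len(intervals) - max_faulty
    # Sweep the sorted endpoints, tracking how many intervals are open.
    events = sorted([(lo, +1) for lo, hi in intervals] +
                    [(hi, -1) for lo, hi in intervals],
                    key=lambda e: (e[0], -e[1]))   # opens before closes at ties
    regions, depth, start = [], 0, None
    for x, kind in events:
        depth += kind
        if kind == +1 and depth == need and start is None:
            start = x
        elif kind == -1 and depth == need - 1 and start is not None:
            regions.append((start, x))
            start = None
    return regions

# Three consistent sensors and one faulty outlier; tolerate one fault.
readings = [(4.9, 5.3), (5.0, 5.4), (5.1, 5.5), (9.0, 9.2)]
print(fuse_intervals(readings, max_faulty=1))   # -> [(5.1, 5.3)]
```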
Multisensor fusion is a method for improving sensor reliability. Because individual sensors are prone to errors and noise, it is advisable to fuse readings from many sensors. This allows several technologies to be used to measure the value of a variable. Unfortunately, it is a non-trivial task to glean the best interpretation from a large number of partially contradictory sensor readings. A number of methods exist for finding the best approximate match for this type of redundant, but possibly faulty, data. This paper states the approximate matching problem and its application to multisensor fusion. Existing algorithms and recent developments are explained, along with their performance and assumptions. A new algorithm is presented which unifies previous research. Appropriate applications and potential bottlenecks are discussed.
Recently developed algorithms in automation theory are often difficult to compare correctly because the systems must interact with a changing environment. All such algorithms depend on sensor inputs, which are notoriously subject to noise and errors. Proper comparison must be platform independent, but it must also take sensor reliability problems into account. We have developed, and are using, a software simulator for comparative evaluation of robotics algorithms. The simulator uses an abstract sensor model which allows evaluation of the algorithms under various sensor reliability parameter values. By applying the competing algorithms to a large number of randomly generated scenarios, it is possible to make valid quantitative comparisons of average performance. This information complements asymptotic time complexity, the most common tool for algorithm comparison. Information is gathered which allows comparison according to criteria chosen by the user, such as distance traveled, number of sensor scans taken, or even collisions with obstacles in the environment. A preliminary discussion of a system capable of quantitative comparison of several algorithms for robot navigation in unknown terrains is presented. This system is in the final stages of acceptance testing and promises to provide a testbed for future robot navigation research.
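A harness of this general shape is sketched below in Python: two toy navigation strategies are run over many randomly generated grid scenarios under an abstract sensor model with a tunable error rate, and user-chosen metrics are averaged. The world model, strategies, and metrics are illustrative stand-ins, not the simulator described in the paper.

```python
import random
from statistics import mean

# Toy Monte Carlo evaluation harness: random scenarios, an abstract noisy
# sensor, and averaged per-run metrics (steps, scans, collisions).

SIZE = 15

def make_scenario(rng, obstacle_prob=0.2):
    grid = [[rng.random() < obstacle_prob for _ in range(SIZE)] for _ in range(SIZE)]
    grid[0][0] = grid[SIZE - 1][SIZE - 1] = False      # keep start and goal free
    return grid

def sensor_blocked(grid, x, y, error_rate, rng):
    """Abstract sensor: reports cell occupancy, flipped with probability error_rate."""
    if not (0 <= x < SIZE and 0 <= y < SIZE):
        return True
    truth = grid[y][x]
    return (not truth) if rng.random() < error_rate else truth

def run(strategy, grid, error_rate, rng, max_steps=300):
    x = y = 0
    steps = scans = collisions = 0
    while (x, y) != (SIZE - 1, SIZE - 1) and steps < max_steps:
        moves = [(1, 0), (0, 1), (-1, 0), (0, -1)]
        if strategy == "greedy":
            moves.sort(key=lambda m: -(m[0] + m[1]))   # prefer moving toward goal
        else:
            rng.shuffle(moves)
        for dx, dy in moves:
            scans += 1
            if not sensor_blocked(grid, x + dx, y + dy, error_rate, rng):
                if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE and not grid[y + dy][x + dx]:
                    x, y = x + dx, y + dy
                else:
                    collisions += 1                    # the sensor reading was wrong
                break
        steps += 1
    return steps, scans, collisions

rng = random.Random(1)
for error_rate in (0.0, 0.1):
    for strategy in ("greedy", "random"):
        results = [run(strategy, make_scenario(rng), error_rate, rng) for _ in range(100)]
        print(strategy, error_rate, "avg (steps, scans, collisions):",
              tuple(round(mean(col), 1) for col in zip(*results)))
```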