KEYWORDS: Particle filters, Target detection, Detection and tracking algorithms, Signal to noise ratio, Electronic filtering, Image processing, Surveillance systems
Particle filtering is a key technique for detecting and tracking moving targets in remote surveillance and air defense systems. Moving targets can be tracked by a particle filter without image registration. However, the standard particle filter is not suited to high-precision tracking of small, dim moving targets that occupy only a few pixels in the image, have a low signal-to-noise ratio (SNR), and frequently flicker. To address this problem, an improved algorithm is proposed for detecting and tracking small dim moving targets. In the new algorithm, the prediction step of the particle filter is improved with a linear regression method, making it applicable to image sequences in which the moving targets gradually become smaller and dimmer. Small dim targets can be detected and tracked directly at low SNR and without registration. The trajectory of the moving target is learned automatically from its past states and is used to generate the importance density function, which serves as the prior probability in the particle filter for sampling and updating particles. By continuously learning and updating the target trajectory, the tracking accuracy is improved. Experimental results show that tracking accuracy is greatly improved and that small dim moving targets can be detected and tracked without registration.
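The idea of using a regression-learned trajectory as the importance density can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the window size `k`, the particle count, and the use of raw pixel intensity as a pseudo-likelihood are all illustrative assumptions.

```python
import numpy as np

def predict_next(history, k=5):
    """Predict the next target position by fitting a line (linear
    regression over time) to the last k tracked positions, one fit per
    image axis. (k is an assumed window size.)"""
    h = np.asarray(history[-k:], dtype=float)
    t = np.arange(len(h))
    pred = []
    for axis in range(h.shape[1]):
        slope, intercept = np.polyfit(t, h[:, axis], 1)
        pred.append(slope * len(h) + intercept)  # extrapolate one step
    return np.array(pred)

def particle_filter_step(frame, history, n_particles=200, spread=2.0, rng=None):
    """One detect/track step: sample particles around the regression
    prediction (the learned trajectory plays the role of the importance
    density), weight them by local image intensity, and return the
    weighted-mean state estimate."""
    rng = np.random.default_rng() if rng is None else rng
    center = predict_next(history)
    particles = center + rng.normal(0.0, spread, size=(n_particles, 2))
    h, w = frame.shape
    rows = np.clip(particles[:, 0].round().astype(int), 0, h - 1)
    cols = np.clip(particles[:, 1].round().astype(int), 0, w - 1)
    weights = frame[rows, cols] + 1e-12  # intensity as pseudo-likelihood
    weights /= weights.sum()
    return weights @ particles
```

For a target that has been moving linearly, the regression prediction places the particle cloud ahead of the target, so even a dim, few-pixel blob at low SNR contributes enough likelihood mass to pull the estimate onto it.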
High-resolution (HR) remote sensing images are characterized by rich, detailed ground-object information, but the more complex structure of the ground objects makes interfering information harder to process. How to obtain more accurate and higher-quality ground-object information from these images has long been a focus of researchers at home and abroad. GF-4, the world's first geostationary-orbit high-spatial-resolution remote sensing satellite, provides remote sensing data with high temporal resolution, large swath width, and 50 m pixel resolution using area-array imaging technology. However, GF-4 imagery is medium- to low-resolution (LR) data in which the details of ground objects are relatively vague and the relationships between objects are not obvious, which limits the acquisition of ground-object information to some extent. Therefore, in this paper we analyze the influence of various factors in the imaging process and construct an image degradation model according to the characteristics of GF-4 satellite images. We adopt a super-resolution (SR) method based on mixed sparse representations (MSR) to double the spatial resolution of the GF-4 image, which both enriches the detailed information of the image and improves image quality. For the SR results of the GF-4 imagery, we apply the maximum likelihood classification (MLC) method to perform an image-classification test and verify the results. The experimental area selected in this paper is Yantai City, Shandong Province, China; LANDSAT 8 OLI data are used as training samples, and the overall accuracy and Kappa coefficient are computed after classification. The results show that the overall accuracy of the super-resolved data is 40% higher than that of the source GF-4 image data, and the improvement is especially pronounced when the spectral characteristics of the ground objects differ markedly. The Kappa coefficient increased by 0.4, the extracted outlines are more complete, and the classification details are more refined.
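The two evaluation tools named in the abstract, maximum likelihood classification and the overall-accuracy/Kappa comparison, can be sketched as follows. This is a generic illustration, not the paper's pipeline; the Gaussian class model with per-class mean and covariance and the tiny example confusion matrix in the usage note are assumptions.

```python
import numpy as np

def mlc_classify(pixels, class_means, class_covs):
    """Maximum likelihood classification: assign each pixel (feature
    vector) to the class with the highest Gaussian log-likelihood."""
    scores = []
    for mu, cov in zip(class_means, class_covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        mah = np.einsum('ij,jk,ik->i', d, inv, d)  # Mahalanobis distance
        scores.append(-0.5 * (logdet + mah))
    return np.argmax(np.stack(scores, axis=1), axis=1)

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                    # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2    # chance agreement
    return po, (po - pe) / (1.0 - pe)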
Accurate onboard-camera pose estimation is one of the challenges of satellite systems. Efforts to improve remote sensing camera pose accuracy continue for various applications, including autonomous navigation, 3D reconstruction, and continuous city modeling. Leading companies, for example Vricon in the USA, can create 3D products of very high spatial accuracy, 3m@SE90 (a 3-meter error at SE90, the abbreviation for Spherical Error 90%). Aiming at the problem of pose estimation accuracy, this paper presents a new method that works from captured images together with reference 3D products. Distinguished from existing methods, our method employs the 3D model to calibrate the pose of the remote sensing camera. First, the high-precision 3D digital surface model is projected into image space using a virtual calibrated camera. Then, the camera motion parameters at the neighboring moment are estimated from the information in adjacent frames. This process consists of three steps: i) feature extraction; ii) similarity measurement and feature matching; iii) camera pose estimation and verification. Finally, the camera pose of the captured image is determined. Experimental results were compared with the initial exterior orientation parameters used to achieve the perspective transformation of the captured images. Furthermore, the proposed method is tested in a hardware experiment that simulates remote sensors and the platform. Results show that acceptable camera pose accuracy can be achieved with the proposed approach.
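The projection of the 3D surface model through a calibrated camera, and the verification step of a candidate pose via reprojection error, can be sketched with a pinhole model. This is a minimal illustration under assumed intrinsics and pose parameters, not the paper's full matching-and-estimation pipeline.

```python
import numpy as np

def project(points3d, K, R, t):
    """Pinhole projection: map world points into image coordinates
    given intrinsics K and pose (R, t)."""
    cam = (R @ points3d.T + t.reshape(3, 1)).T   # world -> camera frame
    uv = (K @ cam.T).T                           # camera -> homogeneous image
    return uv[:, :2] / uv[:, 2:3]                # perspective divide

def reprojection_rmse(points3d, points2d, K, R, t):
    """Pose verification: RMS reprojection error of a candidate pose
    against matched 2D observations."""
    err = project(points3d, K, R, t) - points2d
    return np.sqrt((err ** 2).sum(axis=1).mean())
```

In this sketch a correct pose yields near-zero reprojection error on the matched features, while a perturbed pose is rejected by its large error, which is the role the verification step plays after feature matching.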