We develop a 3×3-channel Bionic Compound Eye Imaging System (BCEIS), composed of an optical system and a mechanical system, and apply it to target positioning. First, we analyze the field-of-view (FOV) overlap condition based on the imaging model of the BCEIS. Then, because the relation between the pixel coordinates of the image points and the world coordinates of the target is nonlinear, we design three general regression neural networks (GRNNs) to position the target under three conditions, in which the FOVs of four, six, and nine channels overlap simultaneously (the image point of each channel is obtained under each condition). To train the networks, we sample a set of image points covering the FOV of the system under the three conditions, and we then use a testing set to verify the reliability of the three GRNNs. The experimental results show that the positioning accuracy is highest in the area where the FOVs of nine channels overlap simultaneously, followed by the area where the FOVs of six channels overlap; it is lowest in the area where the FOVs of four channels overlap. Furthermore, we find that the GRNN outperforms a BP network in both positioning accuracy and time consumption. Adopting the GRNN to position the target provides a new approach for applications such as object tracking and robot navigation.
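The GRNN mapping from pixel coordinates to world coordinates can be sketched as follows. This is a minimal illustration of a standard GRNN (Gaussian-kernel regression over the training samples), not the authors' implementation: the synthetic calibration data, the nonlinear test mapping, and the smoothing parameter `sigma` are all illustrative assumptions.

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.05):
    """Standard GRNN: Gaussian-kernel weighted average of training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances (pattern layer)
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # kernel weights
    return w @ Y_train / np.sum(w)               # summation / output layers

# Synthetic stand-in for calibration data: pixel coords -> world coords.
# (In the paper, X would stack the image points of the overlapping channels.)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))                 # normalized pixel coords
Y = np.stack([3.0 * X[:, 0] + X[:, 1],
              X[:, 0] * X[:, 1]], axis=1)                # assumed nonlinear mapping

x_query = np.array([0.5, 0.5])
y_hat = grnn_predict(X, Y, x_query)                      # estimated world coords
print(y_hat)
```

With a small `sigma` the network reproduces the target of the nearest training sample, which is why the sampled image points must cover the whole FOV of the system.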
An objective method to calibrate the spatial color resolution (SCR) of a color charge-coupled device (CCD) is provided. The experimental prototype contains a target generation system, a test platform, and an analytical processing system. The target generation system creates colors with an adjustable light source and two complementary resolution panels, each carrying target bars. The test platform captures images while the parameters of the light source of the target generation system are adjusted. The analytical processing system processes the images to evaluate the SCR of the color CCD. We focus on this third module and use the minimum detectable color difference (MDCD) and the minimum resolvable color difference (MRCD) to evaluate the SCR. During data collection, we first set the two channels of the generation system (one foreground, one background) to the same color and then gradually change the wavelength of the foreground channel until the foreground image becomes just visible. Because it is difficult to make the ratio of detectable pixels of the target bars exactly meet the requirements of the MDCD and the MRCD by adjusting the wavelength alone, we adopt a general regression neural network to estimate the two indicators, whose maximum estimation error is within 6%. To handle more complex scenarios with changes in brightness and saturation, an image augmentation network (a modified generative adversarial network) is applied to generate synthetic images that cannot easily be captured by our prototype owing to the limits of the light source. The experimental results show that the estimation error of the MDCD and MRCD decreases to almost 1%. The method is independent of the human eye and performs well in selecting the right kind of color CCD, which guarantees the reliability and security of visual detection and recognition systems.
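The color-difference indicators above need a quantitative metric between the foreground and background colors. The abstract does not state which formula is used, so the sketch below assumes the common CIE76 ΔE*ab in CIELAB space, with an sRGB-to-Lab conversion under a D65 white point; the sample colors are purely illustrative.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple (components in [0, 1]) to CIELAB under D65."""
    c = np.asarray(rgb, dtype=float)
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear sRGB -> XYZ (D65 reference white)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])      # normalize by white point
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e76(rgb1, rgb2):
    """CIE76 color difference between two sRGB colors."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

# A just-visible foreground/background pair differs by a small delta E.
print(delta_e76((0.50, 0.30, 0.30), (0.52, 0.30, 0.30)))
```

In a setup like the one described, the MDCD and MRCD would correspond to the smallest such ΔE at which the required ratio of target-bar pixels is detectable or resolvable, respectively.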
We provide a new method to simulate tracking a noncooperative object that moves beyond visual range with a photon-counting laser ranging system. Based on the fundamentals of photon-counting laser ranging techniques and the parameters of the experimental prototype, we generate echo events according to their probability. We then accumulate the echo data over a certain period of time and accurately extract the object's trajectory with mean-shift and random sample consensus (RANSAC) algorithms. From the trajectory obtained during the accumulation period, we predict the relative movement of the object in succeeding cycles with self-tuning α−β filtering, carefully pick out the photon echo signals, and apply polynomial fitting to them to compute the trajectory of the object. The simulation shows that the error between the theoretical trajectory and the extracted trajectory decreases continuously, which suggests that the object can be tracked precisely as time goes by. The simulation in this paper provides a new approach for applications such as satellite orientation, identification, and troubleshooting.
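The prediction step above rests on α−β filtering. A minimal sketch of the classic fixed-gain α−β filter on noisy range measurements follows; the paper's self-tuning variant adapts the gains over time, whereas the gains, noise level, and trajectory here are assumed for illustration only.

```python
import numpy as np

def alpha_beta_track(measurements, dt, alpha=0.85, beta=0.1):
    """Classic alpha-beta filter: predict position/velocity, correct with residual."""
    x, v = measurements[0], 0.0          # initialize state from first measurement
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict position one step ahead
        r = z - x_pred                   # innovation (measurement residual)
        x = x_pred + alpha * r           # correct position estimate
        v = v + (beta / dt) * r          # correct velocity estimate
        estimates.append(x)
    return np.array(estimates)

# Noisy range measurements of an object receding at constant velocity.
rng = np.random.default_rng(1)
dt, n = 0.1, 200
t = dt * np.arange(n)
truth = 1000.0 + 50.0 * t                        # true range in meters (assumed)
meas = truth + rng.normal(0.0, 5.0, size=n)      # assumed ranging noise
est = alpha_beta_track(meas, dt)
print(np.abs(est - truth[1:]).mean())            # mean filtered range error (m)
```

The filtered estimate smooths the per-shot noise while tracking the object's motion, which is what allows the photon echo signals of succeeding cycles to be gated and picked out reliably.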