Protection panels in substations ensure the stable operation of electrical equipment and are of great significance to the local electrical power system. To reduce manual intervention and improve efficiency, we propose a vision-based status recognition method for protection panels that can be used in automatic inspection equipment. The approach is divided into three stages: preprocessing, switch localization, and status recognition. During preprocessing, the image is first warped into the front view by inverse perspective mapping (IPM), guided by a set of four artificial auxiliary marks. A region of interest (ROI) is then extracted from the warped image, discarding most of the irrelevant context. Next, a gradient-intensity feature is computed to locate the switches on a panel. After projecting the gradient-intensity image horizontally and vertically, the layout of the switches is determined by analyzing the two directional projection curves. Finally, an SVM classifier is trained to recognize the status of the switches on a protection panel. The input of the classifier is a gradient-orientation feature extracted from a normalized single-switch region, and the output is the connected or disconnected state of the switch. Experiments show that our approach has low time consumption and achieves a recognition rate of 99%.
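The projection step of the localization stage can be sketched as follows. This is an illustrative toy (the function names, thresholds, and the tiny gradient map are my own, not the paper's): the gradient-intensity map is summed along rows and columns, and runs of the projection curve above a threshold give the row/column bands occupied by switches.

```python
# Sketch (not the paper's code): locating switch rows/columns by projecting
# a gradient-intensity map horizontally and vertically, then thresholding.
def projections(grad):
    """grad: 2D list of gradient intensities (rows x cols)."""
    rows = [sum(r) for r in grad]                                   # horizontal projection
    cols = [sum(r[c] for r in grad) for c in range(len(grad[0]))]   # vertical projection
    return rows, cols

def bands(curve, thresh):
    """Return (start, end) index ranges where the curve exceeds thresh."""
    out, start = [], None
    for i, v in enumerate(curve):
        if v > thresh and start is None:
            start = i
        elif v <= thresh and start is not None:
            out.append((start, i))
            start = None
    if start is not None:
        out.append((start, len(curve)))
    return out

# Toy 4x6 gradient map with one high-gradient block at rows 1-2, cols 2-4
g = [[0, 0, 0, 0, 0, 0],
     [0, 0, 9, 9, 9, 0],
     [0, 0, 9, 9, 9, 0],
     [0, 0, 0, 0, 0, 0]]
r, c = projections(g)
print(bands(r, 5), bands(c, 5))   # row band (1, 3), column band (2, 5)
```

Intersecting the row bands with the column bands yields candidate single-switch regions for the SVM classifier.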
Estimating the head pose of pedestrians is a crucial task in autonomous driving systems. It plays a significant role in many research fields, such as pedestrian intention judgment and human-vehicle interaction. While most current studies focus on driver's-view images, we reckon that surveillance images are also worthy of attention, since more global information can be obtained from them than from driver's-view images. In this paper, we propose a method for head pose estimation from surveillance images. The approach consists of two stages: head detection and pose estimation. Since a pedestrian's head occupies very few pixels in a surveillance image, a two-step strategy is used to improve head detection. First, we train a model to extract the body region from the source image. Second, a head detector is trained to locate the head within the extracted body regions. We use YOLOv3 as the detection network for both body and head detection. Head pose estimation is treated as a classification task with 10 categories. We use ResNet-50 as the backbone of the classifier, whose input is the result of head detection. A series of experiments demonstrate the good performance of the proposed method.
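One mechanical detail of the two-step strategy is worth making explicit: a head box predicted inside a cropped body region is expressed in crop coordinates, so it must be shifted back into full-image coordinates before being passed to the pose classifier. A minimal sketch, with an assumed (x, y, w, h) box convention that is my own, not the paper's:

```python
# Sketch (assumed box format, not the paper's code): composing the two
# detection stages -- a head box predicted inside a body crop is shifted
# back into full-image coordinates before pose classification.
def to_image_coords(body_box, head_box_in_crop):
    """body_box: (x, y, w, h) in the image; head_box_in_crop: (x, y, w, h)
    relative to the cropped body region. Returns head box in image coords."""
    bx, by, _, _ = body_box
    hx, hy, hw, hh = head_box_in_crop
    return (bx + hx, by + hy, hw, hh)

# Body detected at (100, 50); head found at (20, 5) inside the crop
print(to_image_coords((100, 50, 80, 200), (20, 5, 30, 30)))   # (120, 55, 30, 30)
```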
Thin wires pose a great threat to the safety of UAV flight. They occupy only a few pixels and are isolated far from the background, whereas most existing stereo matching methods either require a support region of a certain area to improve robustness or assume depth dependence among neighboring pixels to meet the requirements of global or semi-global optimization. As a result, false alarms or even failures occur when images contain thin wires. A new stereo matching algorithm based on a two-component model is proposed in this paper. According to texture type, the input image is decomposed into two independent component images: one contains only the sparse wire texture, and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments show that the algorithm can effectively compute the depth image of complex scenes for a patrol UAV, detecting thin wires as well as large objects. Compared with current mainstream methods, it has obvious advantages.
The performance of infrared focal plane arrays (IRFPAs) is known to be affected by spatial fixed-pattern noise (FPN) superimposed on the true image. Scene-based nonuniformity correction (NUC) algorithms have attracted wide attention since they require only the readout infrared data captured by the imaging system during normal operation. A novel adaptive NUC algorithm is proposed that exploits the sparse prior that, when derivative filters are applied to infrared images, the filter outputs tend to be sparse. A change detection module based on the derivative-filter results is introduced to prevent stationary objects from being learned into the background, so the ghosting artifact is effectively eliminated. The performance of the new algorithm is evaluated with both real and simulated imagery.
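The sparse prior can be illustrated numerically. This is only an illustration under my own assumptions (the measure and the toy frames are not from the paper): a horizontal derivative of a natural frame is mostly near zero, while column-wise FPN stripes add dense derivative energy, so the mean absolute derivative separates the two cases.

```python
# Sketch (illustrative only): the sparsity measure behind the prior -- a
# horizontal derivative of a natural frame is mostly near zero, while FPN
# (column-wise gain/offset stripes) adds dense derivative energy.
def deriv_l1(frame):
    """Mean absolute horizontal derivative of a 2D list."""
    n = sum(abs(r[i + 1] - r[i]) for r in frame for i in range(len(r) - 1))
    return n / (len(frame) * (len(frame[0]) - 1))

clean = [[5, 5, 5, 5]] * 3     # flat scene: derivative output is sparse
striped = [[5, 9, 5, 9]] * 3   # column FPN: dense derivative energy
print(deriv_l1(clean), deriv_l1(striped))   # 0.0 vs 4.0
```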
As an important tool for acquiring information about a target scene, infrared detectors are widely used in imaging guidance. Owing to limits of material and fabrication technique, the performance of an infrared imaging system is known to be strongly affected by spatial nonuniformity in the photoresponse of the detectors in the array. The temporal highpass filter (THPF) is a popular adaptive NUC algorithm because of its simplicity and effectiveness. However, ghosting artifacts still arise from the blind update of parameters, and performance is noticeably degraded when the method is applied to scenes lacking motion. To tackle this problem, a novel adaptive NUC algorithm based on a Gaussian mixture model (GMM) is put forward, building on the traditional THPF. The drift of each detector is assumed to obey a single Gaussian distribution, and the parameter update is performed selectively based on the scene. The GMM is applied in the new algorithm for background modeling, in which the background is updated selectively so as to avoid the influence of foreground targets on the background update, thus eliminating the ghosting artifact. The performance of the proposed algorithm is evaluated with infrared image sequences containing simulated and real fixed-pattern noise. The results show more reliable fixed-pattern noise reduction, tracking of the parameter drift, and good adaptability to scene changes.
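The selective-update idea can be sketched for a single pixel. The parameterization below (learning rate, match threshold) is an assumption of mine, not the paper's: the per-pixel Gaussian mean is allowed to drift with the slowly varying detector response, but is frozen when the sample deviates too far from the model, i.e. when a foreground target is likely present, which is what prevents ghosting.

```python
# Sketch (assumed parameterization, not the paper's code): selective update
# of a per-pixel Gaussian background model -- the mean tracks the slow
# detector drift but is frozen on large deviations (foreground), so moving
# targets are not learned into the background.
def update(mean, var, x, alpha=0.1, k=2.5):
    if (x - mean) ** 2 <= k * k * var:           # sample matches background
        mean = (1 - alpha) * mean + alpha * x    # track the parameter drift
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return mean, var                             # foreground: no update

m, v = 100.0, 4.0
m, v = update(m, v, 101.0)     # small deviation: background, mean drifts
print(round(m, 2))             # mean moved toward the sample
m2, v2 = update(m, v, 180.0)   # large deviation: foreground, model frozen
print(m2 == m)                 # True: no update performed
```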
Scene-based nonuniformity correction algorithms have attracted wide attention since they require only the readout infrared data captured by the imaging system during normal operation. A system based on the neural-network algorithm is designed for real-time correction, using a foreground/background framework. As the foreground, an FPGA performs the regular nonuniformity correction and blind-pixel detection. As the background, a DSP monitors changes of the scene and updates the correction parameters according to an analysis of the scene. To eliminate ghosting artifacts, an edge-directed learning scheme is used. In testing, the system processes 25 frames per second. Its performance is evaluated with real infrared imaging sequences. The results show more reliable fixed-pattern noise reduction, tracking of the parameter drift, and good adaptability to scene changes.
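A simplified single-pixel view of the neural-network correction with edge-directed learning might look like the following. This is a sketch under my own assumptions (a 1-D two-neighbor "neighborhood", hand-picked learning rate and edge threshold), not the deployed FPGA/DSP code: each pixel is corrected by a gain and offset nudged by LMS toward the local neighborhood mean, and the update is skipped where the neighborhood spans an edge, so scene edges are not learned into the correction parameters.

```python
# Sketch (simplified single-pixel view, not the deployed FPGA/DSP code):
# neural-network NUC corrects a pixel with gain g and offset o, nudged by
# LMS toward the local mean; the edge-directed rule freezes the update
# where neighbors disagree strongly (an edge), avoiding ghosting.
def nuc_step(x, left, right, g, o, lr=0.01, edge=20):
    y = g * x + o                      # corrected output
    if abs(left - right) > edge:       # neighborhood spans an edge: freeze
        return y, g, o
    e = y - (left + right) / 2         # error vs. local mean (desired value)
    return y, g - lr * e * x, o - lr * e

y, g, o = nuc_step(12.0, 9.0, 11.0, 1.0, 0.0)
print(round(y, 2), round(g, 2), round(o, 2))   # output and updated parameters
```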
Scene-based nonuniformity correction algorithms have attracted wide attention since they require only the readout infrared data captured by the imaging system during normal operation. However, ghosting artifacts remain a problem, and performance is noticeably degraded when the methods are applied to scenes lacking motion. To solve this problem, a novel adaptive scene-based NUC algorithm with a foreground/background design is presented. As the foreground, a neural network using an adaptive learning-rate rule performs the normal NUC. As the background, block-based motion detection monitors changes of the scene and determines how the parameters are updated. The strength of the algorithm lies in its simplicity and low computational complexity. The performance of the proposed algorithm is then evaluated with infrared image sequences containing simulated and real fixed-pattern noise. The results show more reliable fixed-pattern noise reduction, tracking of the parameter drift, and good adaptability to scene changes.
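The block-based gating idea can be sketched as follows. This is my own illustration (block size, threshold, and toy frames are assumptions, not the paper's values): a block's mean absolute frame difference must exceed a threshold before that block's correction parameters are allowed to adapt, so a static scene does not corrupt the NUC parameters.

```python
# Sketch (my own illustration): block-based motion detection gating the
# parameter update -- only blocks whose mean absolute frame difference
# exceeds a threshold are flagged for adaptation.
def moving_blocks(prev, curr, bs=2, thresh=5.0):
    moving = []
    for by in range(0, len(prev), bs):
        for bx in range(0, len(prev[0]), bs):
            diff = sum(abs(curr[y][x] - prev[y][x])
                       for y in range(by, by + bs)
                       for x in range(bx, bx + bs))
            if diff / (bs * bs) > thresh:
                moving.append((by, bx))
    return moving

prev = [[0, 0, 0, 0]] * 4
curr = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [40, 40, 0, 0],
        [40, 40, 0, 0]]
print(moving_blocks(prev, curr))   # only the lower-left block moved
```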