With the continuous development of artificial intelligence, using deep learning for intelligent space object detection has become a new research trend. Space-based observation platforms are affected by the space environment and face problems such as the small scale of space objects, heavy noise, low recognizability, and little extractable information. To address these issues, an improved fully convolutional one-stage object detection (FCOS) model based on adaptive feature texture enhancement and receptive field adjustment is proposed. To counter the pixel smoothing and detail loss caused by upsampling in convolutional neural networks (CNNs), this paper proposes a texture detail enhancement module (TDEM) based on sub-pixel convolution, which achieves effective scaling of the feature map by automatically learning the interpolation function and enhances the correlation between image pixels while suppressing irrelevant features. In addition, to obtain denser features and appropriate receptive fields, an adaptive receptive field adjustment module (ARFAM) is proposed that uses densely connected dilated convolutions and an attention mechanism to enrich the contextual information around the object and improve the detection capability of the model. This paper constructs the SDM dataset, which contains 6842 images in three categories: satellites, debris, and meteorites. Experimental results on the SDM dataset show that our method achieves an mAP of 73.9%, demonstrating that its detection performance is significantly better than that of mainstream algorithms.
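The core rearrangement behind sub-pixel convolution, on which the TDEM is based, can be illustrated with a minimal NumPy sketch of the pixel-shuffle operation. The function name and array layout here are illustrative, not taken from the paper: a tensor of shape (C·r², H, W) produced by a preceding convolution is rearranged into (C, H·r, W·r), so the upscaling is learned by the convolution rather than fixed by an interpolation kernel.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    Illustrative sketch of the sub-pixel (pixel-shuffle) rearrangement;
    in a real TDEM this follows a learned convolution layer.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    # Split the channel axis into (c, r, r) sub-pixel positions.
    x = x.reshape(c, r, r, h, w)
    # Interleave the sub-pixel positions with the spatial axes:
    # (c, h, r, w, r) so each r*r block of channels fills an r*r pixel patch.
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)
```

For example, with r = 2 the four channels of a (4, H, W) map become the four sub-pixel positions of each 2×2 output patch in a (1, 2H, 2W) map.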
Drone object detection in low-altitude airspace plays an essential role in many practical applications, such as security and airspace monitoring. Despite the remarkable progress made by many methods, drone object detection remains challenging due to complex backgrounds and large variations in drone scale. To address these issues, an improved fully convolutional one-stage object detection (FCOS) model based on an adaptive weighted feature fusion (AWFF) module is proposed for multiscale drone object detection against complex backgrounds. By learning the spatial relevance of feature maps at each scale and improving the scale invariance of features through a channel attention mechanism, the AWFF module can adaptively fuse features of adjacent scales. In addition, a receptive field enhancement (RFE) module is designed to reduce information loss during feature fusion. Extensive experiments on the constructed low-altitude drone dataset evaluate the effectiveness of the proposed modules and method, showing that the mean average precision of AWFF-FCOS is 2.1% higher than that of the baseline. Further ablation experiments demonstrate that the proposed AWFF and RFE modules can be integrated into state-of-the-art methods to improve drone object detection performance.
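The general idea of adaptively weighted fusion of adjacent-scale features with a channel attention gate can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and a single hypothetical gating matrix `w_fc`; the paper's actual AWFF module is a learned network layer, not this simplified form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def awff_fuse(feat_fine, feat_coarse, w_fc):
    """Fuse two adjacent-scale feature maps with a channel attention gate.

    feat_fine:   (C, H, W) feature map at the finer scale.
    feat_coarse: (C, H//2, W//2) feature map at the coarser scale.
    w_fc:        (C, C) hypothetical learned gating matrix (illustrative).
    """
    # Upsample the coarse map (nearest neighbour) to match the fine scale.
    up = feat_coarse.repeat(2, axis=1).repeat(2, axis=2)
    # Channel attention: global average pooling, linear map, sigmoid gate.
    gap = up.mean(axis=(1, 2))          # (C,) per-channel statistics
    gate = sigmoid(w_fc @ gap)          # (C,) weights in (0, 1)
    # Adaptive per-channel weighted sum of the two scales.
    return gate[:, None, None] * up + (1 - gate)[:, None, None] * feat_fine
```

The sigmoid gate lets each channel lean toward whichever scale is more informative, which is the kind of adaptive, learned weighting the abstract describes, as opposed to the fixed element-wise addition used in a plain feature pyramid.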