Traffic accidents are growing in scale and can cause massive casualties and property damage. Dashcam video is the most important carrier of incident footage, so analyzing this video data is of significant value. Advances in deep convolutional neural networks (CNNs) have substantially propelled visual accident recognition. Compared with traditional 2D CNNs, 3D CNNs can effectively capture spatial-temporal features, but they incur a much higher computational cost. To improve the accuracy of 2D CNNs for accident recognition, we incorporate temporal shift modules (TSMs) into 2D CNNs so they can more effectively capture the appearance and motion features of traffic accidents in dashcam video, enabling simultaneous learning of spatial-temporal features within a 2D CNN. We also incorporate coordinate attention into the model, further strengthening its spatial-temporal feature learning and improving performance. On a re-organized public traffic video dataset, our model achieves higher accident-recognition accuracy than 3D CNNs.
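The core of a TSM is a parameter-free channel shift along the time axis, which lets a 2D CNN exchange information between neighboring frames at negligible cost. Below is a minimal NumPy sketch of that operation, following the common TSM formulation (a 1/`shift_div` slice of channels shifts each way, with zero-fill at the sequence ends); the `shift_div` value and layout are illustrative assumptions, not necessarily this paper's exact settings.

```python
import numpy as np

def temporal_shift(x, shift_div=8):
    """Shift a fraction of channels along the time axis (TSM sketch).

    x: array of shape (T, C, H, W) -- frames, channels, height, width.
    One C//shift_div slice of channels moves backward in time, another
    moves forward; the remaining channels stay in place. Vacated
    positions are zero-filled. shift_div=8 is an illustrative default.
    """
    t, c, h, w = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]               # pull future frames' channels back
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]  # push past frames' channels forward
    out[:, 2 * fold:] = x[:, 2 * fold:]          # leave the rest unshifted
    return out
```

Inserted before a 2D convolution, this gives each frame's features partial access to its temporal neighbors without adding parameters or FLOPs.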
To address the large parameter counts and high computational demands of vehicle appearance damage detection models, which hinder deployment on mobile devices, this paper studies lightweight, high-precision optimization of the YOLOv5s object detection algorithm. Specifically, we introduce a lightweight network into the YOLOv5s architecture to create a more efficient model, integrate an attention mechanism to enhance feature extraction, and employ knowledge distillation to improve accuracy. Experimental results show that the optimized YOLOv5 algorithm achieves significant improvements in both speed and accuracy on the car damage dataset.
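Knowledge distillation of the kind described here is typically implemented as a weighted sum of a hard-label loss and a softened cross-entropy against the teacher's outputs. The sketch below shows the standard Hinton-style formulation in NumPy; the temperature `T` and mixing weight `alpha` are hypothetical defaults, and this paper's exact distillation setup for YOLOv5s may differ.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style distillation loss sketch (T and alpha are illustrative).

    Mixes a soft cross-entropy against the teacher's temperature-softened
    distribution (scaled by T^2, the usual gradient correction) with the
    ordinary hard-label cross-entropy.
    """
    p_teacher = softmax(teacher_logits / T)
    log_p_student = np.log(softmax(student_logits / T) + 1e-12)
    soft = -np.sum(p_teacher * log_p_student, axis=-1).mean() * T * T
    hard_probs = softmax(student_logits)[np.arange(len(labels)), labels]
    hard = -np.log(hard_probs + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

The small student network trains against this combined objective, inheriting the larger teacher's dark knowledge while staying cheap enough for mobile deployment.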