Experimentation and research on safe autonomous landing site selection pipelines for UAVs are critical for their widespread deployment. Most landing site selection methods concentrate on control-system-based approaches that rely on sensor feedback. For vision-based methods, extracting surface parameters from images is vital; surface inclination is the parameter considered in this work. In this paper, a novel dataset consisting of images with different inclination labels is prepared and tested with end-to-end CNN-based architectures. This LandUAVSafe dataset consists of RGB images with inclination angles as ground-truth labels. The dataset is created in ROS Gazebo using the Iris UAV to capture different surfaces at several inclination angles and at different heights. Three CNN-based architectures for inclination estimation, based on Faster R-CNN, YOLOv3, and YOLOv8, have been evaluated for classifying inclination into 0, 15, 30, 45, and 60 degrees. The experimental results show a significant improvement with the YOLOv8-based architecture.
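The paper itself does not include training code; the following is a minimal sketch of how an inclination classifier along these lines could be trained with the Ultralytics YOLOv8 classification API. It assumes a hypothetical directory "LandUAVSafe/" laid out in the standard folder-per-class format; the dataset path, checkpoint, and hyperparameters are illustrative assumptions, not values from the paper.

# Minimal sketch: training a YOLOv8 classification model on an
# inclination-labelled dataset. Assumes a hypothetical directory
# "LandUAVSafe/" with train/ and val/ subfolders, each containing
# one folder per inclination class ("0", "15", "30", "45", "60").
from ultralytics import YOLO

# Start from a pretrained YOLOv8 classification checkpoint.
model = YOLO("yolov8n-cls.pt")

# Train on the assumed dataset layout; epochs and image size are illustrative.
model.train(data="LandUAVSafe", epochs=50, imgsz=224)

# Predict the inclination class for a single aerial surface image.
results = model("surface_sample.jpg")
print(results[0].names[results[0].probs.top1])  # e.g. "30"

A usage note: framing inclination estimation as five-way classification, as the abstract describes, lets a standard image classifier be applied directly; a regression head would be a natural alternative if finer-grained angles were needed.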