Computer vision has been an active field of research for many decades, and it has become widely used in airborne applications over the last decade or two. Much airborne computer vision research has focused on navigation for Unmanned Air Vehicles (UAVs); this paper presents a method to estimate the full 3D position of a UAV by integrating visual cues from a single image with data from an Inertial Measurement Unit (IMU) under the Kalman filter formulation. Previous work on visual 3D position estimation for UAV landing has relied on two or more image frames with feature-rich content; however, raw vision state estimates are highly susceptible to image noise. This paper uses a fairly conventional landing pad whose extracted visual features feed the Kalman filter to obtain optimal 3D position estimates. This methodology promises state estimates better suited for guidance and control of a UAV, and it also enables autonomous landing of UAVs without GPS information. Results of this implementation, tested with flight images, are presented.
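As a rough illustration of the Kalman filter formulation described above, the following is a minimal sketch, not the paper's actual implementation: a linear filter whose state is the UAV's 3D position, where an IMU-derived displacement drives the predict step and a vision-based position fix from a single image drives the update step. All function names, noise values, and the identity-matrix motion/measurement models are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of vision/IMU fusion with a linear Kalman filter.
# State x: 3D position. The IMU supplies a displacement u (predict step);
# a single-image vision fix supplies the measurement z (update step).

def kf_predict(x, P, u, Q):
    """Propagate position with an IMU-derived displacement u."""
    x_pred = x + u          # F = I, B = I: simple additive motion model
    P_pred = P + Q          # process noise Q accounts for IMU drift
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, R):
    """Fuse a vision-based 3D position measurement z."""
    H = np.eye(3)                          # vision observes position directly
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)      # corrected state
    P = (np.eye(3) - K @ H) @ P_pred       # reduced uncertainty
    return x, P

# One predict/update cycle with illustrative numbers: the noisy vision fix
# pulls the IMU-propagated estimate toward the measurement, and the
# posterior covariance shrinks below the predicted covariance.
x, P = np.zeros(3), np.eye(3)
x, P = kf_predict(x, P, u=np.array([1.0, 0.0, -0.5]), Q=0.1 * np.eye(3))
x, P = kf_update(x, P, z=np.array([1.1, -0.1, -0.4]), R=0.2 * np.eye(3))
```

The fused estimate lands between the IMU prediction and the vision measurement, weighted by their respective uncertainties, which is what makes the filtered output smoother than raw per-image vision estimates.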