In robot target recognition, depth cameras and LIDAR are commonly used as extended sensors. The data they collect have complementary strengths and weaknesses: LIDAR provides accurate position information but not the morphology of an object, while a depth camera provides abundant image information but cannot measure accurate three-dimensional positions of objects. To improve the robot's recognition of specific targets, we fuse the two information sources to obtain point cloud data with RGB information. To resolve the inconsistency between the sensors' coordinate systems, we identify a specific calibration plate and propose an ellipse-identification method for defective point clouds. To address the sparsity of LIDAR point clouds, we compare an improved completion algorithm based on computer graphics with a completion algorithm based on deep learning, and find that the computer-graphics method better meets our expectations because it produces a larger number of points. Finally, the accuracy of the computer-graphics method is measured: the errors in length and width are only 0.003% and 3.64%, respectively, which shows that the proposed method meets the required fusion accuracy in small indoor environments.
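The fusion described above can be illustrated with a minimal sketch of the standard colorization step: projecting LIDAR points into the camera image through an extrinsic transform and pinhole intrinsics, then sampling RGB at each projected pixel. This is not the paper's implementation; the function name `colorize_points` and all parameters are assumptions, and the extrinsics would in practice come from the calibration-plate procedure the abstract describes.

```python
import numpy as np

def colorize_points(points_lidar, T_cam_lidar, K, image):
    """Attach RGB values from a camera image to LIDAR points.

    points_lidar: (N, 3) XYZ coordinates in the LIDAR frame.
    T_cam_lidar:  (4, 4) extrinsic transform, LIDAR frame -> camera frame
                  (obtained from calibration; hypothetical here).
    K:            (3, 3) pinhole camera intrinsic matrix.
    image:        (H, W, 3) RGB image.
    Returns an (M, 6) array [x, y, z, r, g, b] containing the points
    that project inside the image with positive depth.
    """
    n = points_lidar.shape[0]
    # Transform points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]
    pts_kept = points_lidar[in_front]
    # Pinhole projection to pixel coordinates.
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Discard projections that fall outside the image bounds.
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = image[v[valid], u[valid]].astype(float)
    return np.hstack([pts_kept[valid], rgb])
```

A point at (0, 0, 1) in a camera-aligned frame with principal point (50, 50) would, for example, pick up the RGB value at pixel (50, 50). The sparse colored cloud produced this way is what a completion algorithm would then densify.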