High-precision extrinsic calibration is a prerequisite for accurate perception with the light detection and ranging (LiDAR) and camera systems commonly used in the autonomous driving industry. We propose a coarse-to-fine strategy for estimating the rigid-body transformation between a solid-state LiDAR with a non-repetitive scanning pattern and an RGB camera, using a chessboard as the calibration target. The method exploits the reflectance intensity of the LiDAR point cloud, which exhibits distinct distributions on the white and black blocks of the chessboard. In the coarse calibration step, a Gaussian mixture model over reflectance intensity is used to extract the point cloud of each unicolor block from the chessboard point cloud. An initial estimate of the extrinsic parameters is then obtained by computing the centroids of the unicolor block point clouds and aligning the resulting corners in the point cloud with the corners detected in the image. In the refinement step, points on the border of each block are extracted as LiDAR features, and an iterative optimization algorithm aligns their intensities with the grayscale features of the image. This refinement exploits the full intensity information and compensates for corner errors in the point cloud caused by binarizing the reflectance intensity. Comparative experiments showed that the proposed method outperforms existing methods in accuracy, and experiments under both simulated and real-world conditions demonstrated its high accuracy, robustness, and consistency.
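As a concrete illustration of the coarse step, the sketch below fits a two-component Gaussian mixture to reflectance intensity to separate dark and bright chessboard points, then takes per-block centroids as 3D features to pair with image corners. This is a minimal sketch under stated assumptions, not the authors' implementation: the point-array layout, the function names, and the use of scikit-learn's GaussianMixture are all illustrative choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_blocks_by_intensity(board_points, rng_seed=0):
    """Separate a chessboard point cloud into low- and high-intensity points
    by fitting a two-component Gaussian mixture to reflectance intensity.

    board_points: (N, 4) array of x, y, z, intensity (assumed layout).
    Returns (black_points, white_points).
    """
    intensity = board_points[:, 3].reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=rng_seed).fit(intensity)
    labels = gmm.predict(intensity)
    # Identify which mixture component models the low-intensity (black) mode.
    low = int(np.argmin(gmm.means_.ravel()))
    black = board_points[labels == low]
    white = board_points[labels != low]
    return black, white

def block_centroids(block_points_list):
    """Centroid of each unicolor block's points (blocks assumed already
    segmented); these serve as 3D features whose geometry yields the
    corners to align with corners detected in the camera image."""
    return np.stack([pts[:, :3].mean(axis=0) for pts in block_points_list])
```

The two intensity modes stand in for the white and black paint; any residual binarization error at block borders is what the paper's intensity-based refinement step is meant to absorb.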
The combination of a multi-layer Light Detection and Ranging (LiDAR) sensor and a camera is common in autonomous perception systems, and the complementary information from these sensors is instrumental for reliable perception of the surroundings. However, obtaining the extrinsic parameters between the LiDAR and the camera, which many perception algorithms require, is difficult. In this study, we present a method that uses only three 3D-2D correspondences to compute the extrinsic parameters between a Velodyne VLP-16 LiDAR and a monocular camera. 3D and 2D features are extracted from the point cloud and the image of a custom calibration target, respectively, and the extrinsic parameters are then computed from these correspondences with the perspective-three-point (P3P) algorithm. Outlier points with minimum energy at the geometric discontinuities of the target are used as control points for extracting the key features from the LiDAR point cloud. Moreover, a novel method is presented to distinguish the correct solution among the multiple P3P solutions; it relies on discrepancies in the conic shapes in the spaces of the different solutions.
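To make the P3P step concrete, here is a minimal sketch using OpenCV's solveP3P, which returns up to four candidate poses from three correspondences. For disambiguation, this sketch reprojects a held-out fourth correspondence and keeps the candidate with the smallest reprojection error; that is a common generic criterion, not the paper's conic-shape-discrepancy method. Function and variable names are illustrative assumptions.

```python
import numpy as np
import cv2

def calibrate_p3p(pts3d, pts2d, validation_3d, validation_2d, K, dist=None):
    """Recover LiDAR-to-camera extrinsics from three 3D-2D correspondences.

    pts3d: (3, 3) target features in the LiDAR frame.
    pts2d: (3, 2) matching pixel coordinates.
    validation_3d/validation_2d: one held-out correspondence used to pick
    among the up-to-four P3P candidates (reprojection-error criterion,
    swapped in here for the paper's conic-discrepancy test).
    Returns ((rvec, tvec), error), or (None, inf) if P3P finds no solution.
    """
    dist = np.zeros(5) if dist is None else dist
    n, rvecs, tvecs = cv2.solveP3P(
        pts3d.astype(np.float64), pts2d.astype(np.float64),
        K.astype(np.float64), dist, flags=cv2.SOLVEPNP_P3P)
    best, best_err = None, np.inf
    for rvec, tvec in zip(rvecs, tvecs):
        # Project the validation point with each candidate pose and score it.
        proj, _ = cv2.projectPoints(
            validation_3d.reshape(1, 1, 3).astype(np.float64),
            rvec, tvec, K.astype(np.float64), dist)
        err = np.linalg.norm(proj.ravel() - np.asarray(validation_2d, float))
        if err < best_err:
            best, best_err = (rvec, tvec), err
    return best, best_err
```

With noise-free inputs all candidates reproject the three defining points exactly, which is why an independent fourth point (or, as in the paper, a geometric consistency test on conic shapes) is needed to select the physically correct pose.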