RAVES-Calib: Robust, Accurate and Versatile Extrinsic Self Calibration Using Optimal Geometric Features
In this paper, we present a user-friendly LiDAR-camera calibration toolkit that is compatible with various LiDAR and camera sensors and requires only a single LiDAR scan and camera image pair in targetless environments. Our approach eliminates the need for an initial transform and remains robust even when the LiDAR-camera extrinsics involve large translational and rotational offsets. We employ the GlueStick pipeline to establish 2D-3D point and line feature correspondences for a robust and automatic initial guess. To enhance accuracy, we quantitatively analyze the impact of feature distribution on calibration results and adaptively weight the cost of each feature based on these metrics. As a result, extrinsic parameters are optimized by filtering out the adverse effects of inferior features. We validated our method through extensive experiments across various LiDAR-camera sensors in both indoor and outdoor settings. The results demonstrate that our method provides superior robustness and accuracy compared to state-of-the-art (SOTA) techniques. Our code is open-sourced on GitHub to benefit the community.
💡 Research Summary
The paper presents “RAVES-Calib,” a novel, fully automatic, and targetless toolkit for calibrating the extrinsic parameters between LiDAR and camera sensors. The core challenge it addresses is achieving high calibration accuracy comparable to target-based methods while offering the convenience and flexibility of targetless operation in arbitrary environments.
The methodology is built upon four key pillars. First, it eliminates the need for any manual initial guess. By leveraging the state-of-the-art GlueStick deep learning model, the system automatically establishes 2D-3D correspondences of both point and line features between a camera RGB image and a LiDAR intensity image generated from the point cloud. These correspondences are then used to compute a robust initial transformation estimate using a combination of RANSAC, PnP, and point-to-line distance minimization.
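The initial-guess stage can be pictured with a minimal Direct Linear Transform (DLT) PnP solver. This is a simplified stand-in, not the paper's implementation: the function name is hypothetical, and the RANSAC outlier-rejection loop and point-to-line refinement described above are omitted.

```python
import numpy as np

def dlt_pnp(pts3d, pts2d, K):
    """Recover camera pose (R, t) from n >= 6 2D-3D matches via the
    Direct Linear Transform. Illustrative sketch only: no RANSAC,
    no iterative refinement, noiseless correspondences assumed."""
    # Work in normalized image coordinates: x = K^-1 [u, v, 1]^T
    uv1 = np.hstack([pts2d, np.ones((len(pts2d), 1))])
    xy = (np.linalg.inv(K) @ uv1.T).T[:, :2]
    # Each match contributes two linear constraints on the 12 entries of P = [R|t]
    A = []
    for (X, Y, Z), (x, y) in zip(pts3d, xy):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # DLT recovers P only up to scale and sign; fix both so det(R) = +1
    R_raw, t_raw = P[:, :3], P[:, 3]
    d = np.linalg.det(R_raw)
    s = np.sign(d) * abs(d) ** (1.0 / 3.0)
    R_raw, t_raw = R_raw / s, t_raw / s
    # Project the scaled 3x3 block onto SO(3)
    U, _, Vt2 = np.linalg.svd(R_raw)
    return U @ Vt2, t_raw
```

In a full pipeline this linear estimate would seed RANSAC hypothesis scoring and a subsequent nonlinear refinement over both point and line correspondences.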
Second, and most significantly, the paper introduces a novel heuristic-free algorithm for optimal feature selection. The authors perform a quantitative analysis of how feature distribution impacts calibration accuracy. By analyzing the Hessian matrix of the optimization problem, they derive a quantitative metric that evaluates each feature’s contribution to estimating the six degrees of freedom (6DOF) parameters. This allows the system to weight or filter features adaptively, effectively screening out poorly distributed or “inferior” features that could degrade the final result. This moves the calibration process beyond simple feature matching to an intelligent, optimization-aware feature selection stage.
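One way to picture such an optimization-aware feature score: under Gauss-Newton, the Hessian is approximated by H = Σᵢ JᵢᵀJᵢ, and a feature matters most when removing it weakens the least-constrained direction of H. The eigenvalue-drop score below is an illustrative proxy under that framing, not the paper's derived metric, and the function name is hypothetical.

```python
import numpy as np

def feature_contribution_scores(jacobians):
    """Score each feature by how much removing it shrinks the smallest
    eigenvalue of the Gauss-Newton Hessian H = sum_i J_i^T J_i, where
    J_i is that feature's residual Jacobian w.r.t. the 6-DOF extrinsics.
    Illustrative metric only -- the paper derives its own weighting."""
    H = sum(J.T @ J for J in jacobians)
    lam_full = np.linalg.eigvalsh(H)[0]  # smallest eigenvalue, full set
    scores = []
    for J in jacobians:
        H_without = H - J.T @ J          # leave-one-out Hessian
        lam_without = np.linalg.eigvalsh(H_without)[0]
        # Large drop => the feature uniquely constrains a weak direction
        scores.append(lam_full - lam_without)
    return np.array(scores)
```

Under this kind of score, a feature that duplicates information already present (e.g. many collinear edge points) receives a low value, while one that constrains an otherwise weak degree of freedom scores high, matching the paper's goal of screening out poorly distributed features.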
Third, to ensure robustness across diverse scenarios, the system employs multiple types of geometric features: point features and line features from the intensity image correspondence, as well as depth-continuous edge features extracted directly from the 3D point cloud. This multi-feature approach compensates for situations where a single feature type is insufficient or unevenly distributed.
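Depth-continuous edges arise where two locally fitted planes meet without a range jump. A minimal sketch of recovering such an edge from two plane fits (the plane parameters are assumed to be already estimated, e.g. by RANSAC plane fitting on the point cloud; the function name is hypothetical):

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Given two planes n . p + d = 0 with unit normals n1, n2, return a
    point on their intersection line and the line's unit direction.
    Depth-continuous edge features lie along such intersections."""
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        raise ValueError("planes are (nearly) parallel; no unique edge")
    direction = direction / norm
    # Least-norm point satisfying both plane equations
    A = np.vstack([n1, n2])
    b = -np.array([d1, d2])
    point = np.linalg.lstsq(A, b, rcond=None)[0]
    return point, direction
```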
Finally, the refined extrinsic parameters are obtained through a nonlinear optimization process that minimizes a combined cost function incorporating reprojection errors for points and point-to-line distances for line features, using the optimally selected feature subset.
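The two residual types entering that combined cost can be sketched as toy helpers (names and signatures are assumptions for illustration; the per-feature weights from the selection stage and any robust kernels are omitted):

```python
import numpy as np

def reprojection_residual(p_cam, uv, K):
    """Point-feature cost: pixel error between the projected 3D point
    (already transformed into the camera frame) and its 2D match."""
    proj = K @ p_cam
    return proj[:2] / proj[2] - uv

def point_to_line_residual(p_cam, line_uv, K):
    """Line-feature cost: signed pixel distance from the projected point
    to a 2D image line (a, b, c) normalized so that a^2 + b^2 = 1."""
    proj = K @ p_cam
    u, v = proj[:2] / proj[2]
    a, b, c = line_uv
    return a * u + b * v + c
```

A nonlinear solver (e.g. Levenberg-Marquardt) would stack weighted copies of both residual types over the selected feature subset and iterate on the 6-DOF extrinsics until convergence.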
The proposed toolkit was extensively validated with various LiDAR (including Velodyne, Ouster, Livox) and camera pairs in both indoor and outdoor environments. Experimental results demonstrate that RAVES-Calib outperforms existing state-of-the-art targetless methods in terms of both accuracy and robustness, even with large initial misalignments. Notably, it achieves accuracy on par with traditional target-based methods while requiring only a single, targetless scan-image pair. The authors have open-sourced the code to facilitate further research and application in the community, positioning RAVES-Calib as a practical and powerful solution for sensor fusion in autonomous driving, robotics, and mapping.