CaLiV: LiDAR-to-Vehicle Calibration of Arbitrary Sensor Setups
In autonomous systems, sensor calibration is essential for safe and efficient navigation in dynamic environments. Accurate calibration is a prerequisite for reliable perception and planning tasks such as object detection and obstacle avoidance. Many existing LiDAR calibration methods require overlapping fields of view, while others use external sensing devices or postulate a feature-rich environment. In addition, Sensor-to-Vehicle calibration is not supported by the vast majority of calibration algorithms. In this work, we propose a novel target-based technique for extrinsic Sensor-to-Sensor and Sensor-to-Vehicle calibration of multi-LiDAR systems called CaLiV. This algorithm works for non-overlapping fields of view and does not require any external sensing devices. First, we apply motion to produce field-of-view overlaps and utilize a simple Unscented Kalman Filter to obtain vehicle poses. Then, we use the Gaussian mixture model-based registration framework GMMCalib to align the point clouds in a common calibration frame. Finally, we reduce the task of recovering the sensor extrinsics to a minimization problem. We show that both translational and rotational Sensor-to-Sensor errors can be solved accurately by our method. In addition, all Sensor-to-Vehicle rotation angles can also be calibrated with high accuracy. We validate the simulation results in real-world experiments. The code is open-source and available at https://github.com/TUMFTM/CaLiV.
💡 Research Summary
The paper introduces CaLiV, a novel calibration framework that simultaneously solves Sensor‑to‑Sensor (S2S) and Sensor‑to‑Vehicle (S2V) extrinsic calibration for multi‑LiDAR setups without requiring overlapping fields of view or any external measuring devices. The authors exploit vehicle motion to create temporary overlaps: by driving a curved trajectory, each LiDAR observes a common calibration target at different time steps even though the sensors never see the target simultaneously. Vehicle poses are estimated with an Unscented Kalman Filter that fuses IMU, GPS, velocity and orientation data, providing a robust, noise‑aware trajectory estimate.
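The summary does not spell out the filter equations, so the following Python/NumPy sketch shows only the generic textbook form of one UKF predict/update cycle; the process model `f`, measurement model `h`, and all noise parameters are hypothetical stand-ins, not the paper's IMU/GPS/velocity fusion setup:

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Symmetric sigma points and weights for mean x, covariance P."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)   # columns are scaled sqrt directions
    pts = np.vstack([x, x + S.T, x - S.T])    # shape (2n+1, n)
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def ukf_step(x, P, z, f, h, Q, R, kappa=1.0):
    """One predict/update cycle: process model f, measurement model h."""
    # Predict: push sigma points through the (possibly nonlinear) process model
    X, w = sigma_points(x, P, kappa)
    Xp = np.array([f(xi) for xi in X])
    x_pred = w @ Xp
    P_pred = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, Xp - x_pred))
    # Update: push predicted sigma points through the measurement model
    Xs, w = sigma_points(x_pred, P_pred, kappa)
    Z = np.array([h(xi) for xi in Xs])
    z_pred = w @ Z
    S = R + sum(wi * np.outer(d, d) for wi, d in zip(w, Z - z_pred))
    C = sum(wi * np.outer(dx, dz) for wi, dx, dz in zip(w, Xs - x_pred, Z - z_pred))
    K = C @ np.linalg.inv(S)                  # Kalman gain
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T
```

For example, with a constant-velocity state `[position, velocity]` and position-only measurements, repeated calls to `ukf_step` converge to the true trajectory despite a wrong initial velocity estimate.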
Point clouds from each LiDAR are first roughly aligned using the initial (possibly erroneous) LiDAR‑to‑vehicle transforms and the estimated vehicle pose. Ground points are removed with RANSAC, leaving only the target points. These pre‑aligned clouds are then fed into GMMCalib, a Gaussian‑mixture‑model‑based registration method that aligns all clouds to a latent “calibration frame” C rather than performing pairwise ICP. The result is a set of transformations C_Li^a that map each LiDAR’s scan at time i into the common frame.
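The ground-removal step can be sketched as a generic plane-fitting RANSAC in NumPy; this is not the authors' implementation, and the distance threshold and iteration count below are illustrative assumptions:

```python
import numpy as np

def remove_ground_ransac(points, n_iters=200, dist_thresh=0.05, seed=0):
    """Fit the dominant plane with RANSAC and drop its inliers.

    points: (N, 3) array; assumes the ground is the largest planar region.
    Returns the remaining (non-ground) points, e.g. the calibration target.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Three random points define a candidate plane with unit normal n
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:               # degenerate (near-collinear) sample
            continue
        n /= norm
        inliers = np.abs((points - p0) @ n) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]
```

On a cloud consisting of a flat ground patch plus an elevated target, the function keeps only the target points.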
The core of CaLiV is a non‑linear optimization that minimizes the inconsistency between the estimated calibration‑frame‑to‑reference‑frame transformations across all sensor‑time pairs that actually observed the target. By defining a set S of observable pairs, the authors formulate a cost function that sums the pairwise errors e(Ĉ_R_i^a, Ĉ_R_j^b). The unknowns are the true vehicle‑to‑LiDAR transform for a reference LiDAR (V_L2) and the S2S transform between the LiDARs (L2L1). An iterative scheme first solves for both simultaneously, then refines S2V calibration with the S2S transform fixed, leading to higher accuracy. The optimization is performed with a Gauss‑Newton/Levenberg‑Marquardt solver.
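To illustrate the structure of this pairwise-consistency cost (not the authors' actual parameterization, which works in SE(3) with two sensors), here is a heavily simplified SE(2) analogue: a single unknown sensor-to-vehicle transform X is recovered by demanding that the per-observation estimates of the calibration-frame pose agree across all observation pairs, solved with SciPy's Levenberg–Marquardt. The trajectory, ground-truth values, and noise-free registration outputs are all hypothetical:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import least_squares

def T(theta, x, y):
    """SE(2) homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def log3(M):
    """Minimal (theta, x, y) parameters of an SE(2) matrix."""
    return np.array([np.arctan2(M[1, 0], M[0, 0]), M[0, 2], M[1, 2]])

inv = np.linalg.inv

# Hypothetical ground truth: sensor-to-vehicle extrinsic and vehicle poses
# along a curved trajectory (the curvature makes the extrinsic observable).
X_true = T(0.3, 0.5, 0.2)
V = [T(0.4 * i, float(i), 0.2 * i**2) for i in range(5)]
# Simulated registration output: sensor frame -> calibration frame (world here)
C_L = [Vi @ X_true for Vi in V]

def residuals(p):
    X = T(*p)
    # Each observation yields an estimate of the calibration-frame pose;
    # with the correct X, all estimates coincide and the residuals vanish.
    C_hat = [C_L[i] @ inv(X) @ inv(V[i]) for i in range(len(V))]
    return np.concatenate([log3(inv(C_hat[i]) @ C_hat[j])
                           for i, j in combinations(range(len(V)), 2)])

sol = least_squares(residuals, x0=np.zeros(3), method="lm")
```

Here `sol.x` recovers the true extrinsic (0.3, 0.5, 0.2); the same agreement-across-pairs structure underlies the cost over the set S of observable sensor-time pairs described above.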
Experiments are conducted in simulation and on a real vehicle. In the worst‑case scenario, two LiDARs face opposite directions, providing no direct overlap. Simulated results show average rotation errors below 0.08° and translation errors under 2 cm, outperforming state‑of‑the‑art SLAM‑based and hand‑eye calibration methods. Real‑world trials confirm comparable performance, demonstrating that the method works under realistic sensor noise and vehicle dynamics.
Key contributions are: (1) the first target‑based S2V calibration method for multi‑LiDAR rigs that does not rely on external sensors; (2) the ability to handle non‑overlapping fields of view by leveraging GMMCalib’s robustness to different viewpoints; (3) a motion‑based pose estimation pipeline using a UKF that integrates multiple onboard sensors; (4) open‑source release of the full implementation.
Limitations include the need for a sufficiently rich vehicle trajectory to generate enough overlapping observations, dependence on a calibration target that is large enough for reliable GMM registration, and the current focus on LiDAR‑to‑LiDAR setups (extension to camera‑LiDAR or other sensor combinations is left for future work). Overall, CaLiV provides a practical, high‑accuracy solution for calibrating complex LiDAR arrays on autonomous platforms, improving sensor fusion reliability without additional hardware overhead.