Batch Differentiable Pose Refinement for In-the-Wild Camera/LiDAR Extrinsic Calibration
Abstract
Accurate camera-to-LiDAR (Light Detection and Ranging) extrinsic calibration is essential for robotic tasks that rely on tight sensor fusion, such as target tracking and odometry. Calibration is typically performed before deployment, in controlled conditions and with calibration targets; however, this limits scalability and complicates subsequent recalibration. We propose a novel approach for camera-LiDAR calibration that uses end-to-end direct alignment and requires no calibration targets. Our batched formulation improves sample efficiency during training and robustness at inference time. We present experimental results on publicly available real-world data demonstrating 1.6 cm / $0.07^{\circ}$ median accuracy when transferred to unseen sensors from held-out data sequences. We also show state-of-the-art zero-shot transfer to unseen cameras, LiDARs, and environments.
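To make the batched direct-alignment idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it refines a single shared camera-LiDAR extrinsic (parameterised as an axis-angle rotation plus a translation) over a batch of scan/image pairs by gradient descent on a projected misalignment cost. The cost maps, intrinsics, tensor shapes, and the alignment_loss function are all illustrative assumptions.

# Illustrative sketch of batched differentiable pose refinement.
# All names, shapes, and the cost maps below are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def skew(w):
    """Differentiable 3x3 skew-symmetric matrix from a 3-vector."""
    z = torch.zeros((), dtype=w.dtype, device=w.device)
    return torch.stack([
        torch.stack([z, -w[2], w[1]]),
        torch.stack([w[2], z, -w[0]]),
        torch.stack([-w[1], w[0], z]),
    ])

def so3_exp(w):
    """Rodrigues' formula: axis-angle (3,) -> rotation matrix (3, 3)."""
    # sqrt(sum + eps) keeps the gradient finite at w = 0.
    theta = torch.sqrt(w.pow(2).sum() + 1e-12)
    K = skew(w)
    return (torch.eye(3, dtype=w.dtype, device=w.device)
            + torch.sin(theta) / theta * K
            + (1.0 - torch.cos(theta)) / theta ** 2 * (K @ K))

def alignment_loss(xi, points, cost_maps, K_intr):
    """Project each LiDAR scan through one shared extrinsic and average a
    per-pixel misalignment cost at the projected locations.

    xi        -- 6-vector: axis-angle rotation then translation (the extrinsic)
    points    -- (B, N, 3) LiDAR points, one scan per batch element
    cost_maps -- (B, 1, H, W) per-image misalignment cost (placeholder signal)
    K_intr    -- (3, 3) camera intrinsics
    """
    R, t = so3_exp(xi[:3]), xi[3:]
    X_cam = points @ R.T + t                       # points in the camera frame
    pix = X_cam @ K_intr.T                         # perspective projection
    uv = pix[..., :2] / pix[..., 2:3].clamp_min(1e-6)
    H, W = cost_maps.shape[-2:]
    # Normalise pixel coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    cost = F.grid_sample(cost_maps, grid.unsqueeze(2), align_corners=True)
    return cost.mean()

# Toy data: B scan/image pairs sharing a single unknown extrinsic.
B, N, H, W = 8, 2048, 64, 96
points = torch.randn(B, N, 3) + torch.tensor([0., 0., 5.])  # scans in front of camera
cost_maps = torch.rand(B, 1, H, W)                           # placeholder cost maps
K_intr = torch.tensor([[50., 0., W / 2], [0., 50., H / 2], [0., 0., 1.]])

xi = torch.zeros(6, requires_grad=True)
opt = torch.optim.Adam([xi], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = alignment_loss(xi, points, cost_maps, K_intr)
    loss.backward()
    opt.step()

In this toy setup the per-image cost maps stand in for whatever alignment signal a learned pipeline would supply; the appeal of a batched formulation is that the single extrinsic receives gradients from every scan/image pair in the batch simultaneously.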
Cite
Text
Fu and Fallon. "Batch Differentiable Pose Refinement for In-the-Wild Camera/LiDAR Extrinsic Calibration." Conference on Robot Learning, 2023.
BibTeX
@inproceedings{fu2023corl-batch,
title = {{Batch Differentiable Pose Refinement for In-the-Wild Camera/LiDAR Extrinsic Calibration}},
author = {Fu, Lanke Frank Tarimo and Fallon, Maurice},
booktitle = {Conference on Robot Learning},
year = {2023},
pages = {1362--1377},
volume = {229},
url = {https://mlanthology.org/corl/2023/fu2023corl-batch/}
}