Relative Pose Estimation and Fusion of Omnidirectional and LiDAR Cameras
Abstract
This paper presents a novel approach for the extrinsic parameter estimation of omnidirectional cameras with respect to a 3D LiDAR coordinate frame. The method works without a specific setup or calibration target, using only a single pair of 2D-3D data. Pose estimation is formulated as a 2D-3D nonlinear shape registration task that is solved without point correspondences or complex similarity metrics. It relies on a set of corresponding regions, and the pose parameters are obtained by solving a small system of nonlinear equations. The efficiency and robustness of the proposed method were confirmed on both synthetic and real data in urban environments.
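To illustrate the idea of correspondence-free, region-based 2D-3D registration, the following is a minimal sketch, not the authors' exact formulation: it assumes a central spherical camera model for the omnidirectional image, uses low-order geometric moments of corresponding regions as the similarity, and recovers the six pose parameters with a small nonlinear least-squares solve. All function names and the placeholder region data are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project_to_sphere(points_3d, rot_vec, t):
    """Transform 3D points into the camera frame and project them onto the
    unit sphere (generic model for central omnidirectional cameras)."""
    pc = Rotation.from_rotvec(rot_vec).apply(points_3d) + t
    return pc / np.linalg.norm(pc, axis=1, keepdims=True)


def region_moments(pts):
    """Low-order geometric moments (mean and covariance entries) of a point
    set, used here as a correspondence-free region descriptor."""
    mu = pts.mean(axis=0)
    cov = np.cov(pts.T)
    return np.concatenate([mu, cov[np.triu_indices(pts.shape[1])]])


def residuals(pose, lidar_region, sphere_region):
    """Moment mismatch between the projected LiDAR region and the observed
    image region (already lifted to the unit sphere by the intrinsic model)."""
    proj = project_to_sphere(lidar_region, pose[:3], pose[3:])
    return region_moments(proj) - region_moments(sphere_region)


# Hypothetical inputs: a segmented LiDAR region (N x 3) and the corresponding
# image region lifted to the sphere; here synthesized from a known pose so the
# solver should recover it.
lidar_region = np.random.rand(200, 3) + np.array([0.0, 0.0, 5.0])
true_rot, true_t = np.array([0.05, -0.02, 0.10]), np.array([0.2, -0.1, 0.05])
sphere_region = project_to_sphere(lidar_region, true_rot, true_t)

sol = least_squares(residuals, x0=np.zeros(6), args=(lidar_region, sphere_region))
print("estimated pose (rotation vector, translation):", sol.x)
```

In practice several corresponding region pairs would be stacked into one residual vector, which is what makes the resulting system of nonlinear equations well constrained without any point-level correspondences.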
Cite
Text
Tamas et al. "Relative Pose Estimation and Fusion of Omnidirectional and LiDAR Cameras." European Conference on Computer Vision Workshops, 2014. doi:10.1007/978-3-319-16181-5_49
Markdown
[Tamas et al. "Relative Pose Estimation and Fusion of Omnidirectional and LiDAR Cameras." European Conference on Computer Vision Workshops, 2014.](https://mlanthology.org/eccvw/2014/tamas2014eccvw-relative/) doi:10.1007/978-3-319-16181-5_49
BibTeX
@inproceedings{tamas2014eccvw-relative,
title = {{Relative Pose Estimation and Fusion of Omnidirectional and LiDAR Cameras}},
author = {Tamas, Levente and Frohlich, Robert and Kato, Zoltan},
booktitle = {European Conference on Computer Vision Workshops},
year = {2014},
pages = {640--651},
doi = {10.1007/978-3-319-16181-5_49},
url = {https://mlanthology.org/eccvw/2014/tamas2014eccvw-relative/}
}