DiffPoseNet: Direct Differentiable Camera Pose Estimation

Abstract

Current deep neural network approaches for camera pose estimation rely on scene structure for 3D motion estimation, but this decreases robustness and thereby makes cross-dataset generalization difficult. In contrast, classical approaches to structure from motion estimate 3D motion using optical flow and then compute depth. Their accuracy, however, depends strongly on the quality of the optical flow. To avoid this issue, direct methods have been proposed, which separate 3D motion from depth estimation but compute 3D motion using only image gradients in the form of normal flow. In this paper, we introduce a network, NFlowNet, for normal flow estimation, which is used to enforce robust and direct constraints. In particular, normal flow is used to estimate relative camera pose based on the cheirality (depth positivity) constraint. We achieve this by formulating the optimization problem as a differentiable cheirality layer, which allows for end-to-end learning of camera pose. We perform extensive qualitative and quantitative evaluation of the proposed DiffPoseNet's sensitivity to noise and of its generalization across datasets. We compare our approach to existing state-of-the-art methods on the KITTI, TartanAir, and TUM-RGBD datasets.
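The cheirality (depth positivity) constraint mentioned in the abstract can be illustrated numerically: under the classical pinhole motion-field model, each pixel's normal flow, combined with a candidate camera motion, implies an inverse depth, and a correct pose must make that inverse depth positive everywhere. The following is a minimal NumPy sketch of this idea using the standard motion-field parameterization; the function names, the positivity-fraction score, and the synthetic data are illustrative assumptions, not the paper's differentiable-layer implementation.

```python
import numpy as np

def motion_field_matrices(x, y, f):
    # Classical pinhole motion field: u = (1/Z) * A(x) @ t + B(x) @ w,
    # where A is the translational part and B the rotational part.
    A = np.array([[-f, 0.0, x], [0.0, -f, y]])
    B = np.array([[x * y / f, -(f + x * x / f), y],
                  [f + y * y / f, -x * y / f, -x]])
    return A, B

def cheirality_score(pts, grads, nflow, t, w, f, eps=1e-6):
    """Fraction of pixels whose inverse depth, implied by the candidate
    pose (t, w) and the measured normal flow, is positive (cheirality).
    Only the flow component along the image gradient g is observable."""
    ok, total = 0, 0
    for (x, y), g, n in zip(pts, grads, nflow):
        A, B = motion_field_matrices(x, y, f)
        gt = g @ (A @ t)        # translational contribution along g
        gr = g @ (B @ w)        # rotational contribution along g
        if abs(gt) < eps:
            continue            # pixel carries no depth information
        total += 1
        if (n - gr) / gt > 0:   # implied 1/Z must be positive
            ok += 1
    return ok / max(total, 1)

# Synthetic check: normal flow generated from a known pose and positive
# depths should satisfy cheirality everywhere; a reversed translation
# flips the sign of every implied inverse depth.
rng = np.random.default_rng(0)
f = 1.0
t_true = np.array([0.1, 0.0, 1.0])
w_true = np.array([0.01, -0.02, 0.005])
pts = rng.uniform(-0.5, 0.5, size=(200, 2))
grads = rng.normal(size=(200, 2))
grads /= np.linalg.norm(grads, axis=1, keepdims=True)
nflow = []
for (x, y), g in zip(pts, grads):
    A, B = motion_field_matrices(x, y, f)
    Z = rng.uniform(2.0, 10.0)              # positive scene depth
    u = (A @ t_true) / Z + B @ w_true       # full motion field
    nflow.append(g @ u)                     # observable normal flow

score_true = cheirality_score(pts, grads, nflow, t_true, w_true, f)
score_bad = cheirality_score(pts, grads, nflow, -t_true, w_true, f)
```

Scoring candidate poses by this positivity fraction is a discrete stand-in for the paper's approach; DiffPoseNet instead formulates the constraint as a differentiable cheirality layer so the pose can be learned end-to-end.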

Cite

Text

Parameshwara et al. "DiffPoseNet: Direct Differentiable Camera Pose Estimation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00672

Markdown

[Parameshwara et al. "DiffPoseNet: Direct Differentiable Camera Pose Estimation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/parameshwara2022cvpr-diffposenet/) doi:10.1109/CVPR52688.2022.00672

BibTeX

@inproceedings{parameshwara2022cvpr-diffposenet,
  title     = {{DiffPoseNet: Direct Differentiable Camera Pose Estimation}},
  author    = {Parameshwara, Chethan M. and Hari, Gokul and Fermüller, Cornelia and Sanket, Nitin J. and Aloimonos, Yiannis},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {6845--6854},
  doi       = {10.1109/CVPR52688.2022.00672},
  url       = {https://mlanthology.org/cvpr/2022/parameshwara2022cvpr-diffposenet/}
}