Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF

Abstract

We solve the problem of 6-DoF localisation and dense 3D reconstruction in spatial environments as approximate Bayesian inference in a deep state-space model. Our approach leverages both learning and domain knowledge from multiple-view geometry and rigid-body dynamics. This results in an expressive predictive model of the world, often missing in current state-of-the-art visual SLAM solutions. The combination of variational inference, neural networks and a differentiable raycaster ensures that our model is amenable to end-to-end gradient-based optimisation. We evaluate our approach on realistic unmanned aerial vehicle flight data, nearing the performance of state-of-the-art visual-inertial odometry systems. We demonstrate the applicability of the model to generative prediction and planning.
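For context, a deep state-space model of the kind the abstract describes typically factorises a sequence of observations $x_{1:T}$ and latent states $z_{1:T}$ (here, 6-DoF poses alongside a map), optionally conditioned on controls $u_t$, and is trained by maximising an evidence lower bound. The sketch below is the generic textbook formulation, not necessarily the paper's exact objective:

```latex
% Generative factorisation of a state-space model with controls u_t:
p(x_{1:T}, z_{1:T} \mid u_{1:T})
  = p(z_1)\,\prod_{t=2}^{T} p(z_t \mid z_{t-1}, u_t)\,\prod_{t=1}^{T} p(x_t \mid z_t)

% Variational inference with an approximate posterior q maximises the ELBO:
\mathcal{L} = \mathbb{E}_{q(z_{1:T} \mid x_{1:T})}
  \big[ \log p(x_{1:T} \mid z_{1:T}) \big]
  - \mathrm{KL}\!\big( q(z_{1:T} \mid x_{1:T}) \,\|\, p(z_{1:T} \mid u_{1:T}) \big)
```

In this setting the emission model $p(x_t \mid z_t)$ is where a differentiable raycaster would enter, rendering expected observations from the current map and pose so that gradients flow end-to-end.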

Cite

Text

Mirchev et al. "Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF." International Conference on Learning Representations, 2021.

Markdown

[Mirchev et al. "Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/mirchev2021iclr-variational/)

BibTeX

@inproceedings{mirchev2021iclr-variational,
  title     = {{Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF}},
  author    = {Mirchev, Atanas and Kayalibay, Baris and van der Smagt, Patrick and Bayer, Justin},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/mirchev2021iclr-variational/}
}