Learning Observation Models with Incremental Non-Differentiable Graph Optimizers in the Loop for Robotics State Estimation

Abstract

We consider the problem of learning observation models for robot state estimation with incremental non-differentiable optimizers in the loop. Convergence to the correct belief over the robot state depends heavily on properly tuned observation models, which serve as input to the optimizer. We propose a gradient-based learning method that converges significantly faster to model estimates yielding higher-quality solutions than an existing state-of-the-art method, as measured by tracking accuracy on unseen robot test trajectories.
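The abstract does not fix a concrete algorithm, so the following is only a minimal sketch of the general setup it describes: an outer gradient loop tunes an observation-noise parameter while a black-box weighted least-squares solve (standing in for an incremental non-differentiable graph optimizer such as iSAM2) runs in the inner loop. A zeroth-order finite-difference gradient is used here purely as a stand-in for the paper's actual gradient-based method; the toy 1D trajectory and all names and parameters are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: learning an observation-noise parameter with a
# black-box (non-differentiable) optimizer in the loop. Finite differences
# stand in for the paper's gradient method; everything here is illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
T = 20
gt = np.cumsum(rng.normal(1.0, 0.1, size=T))               # ground-truth 1D trajectory
odom = np.diff(gt, prepend=0.0) + rng.normal(0, 0.05, T)   # odometry measurements
obs = gt + rng.normal(0, 0.3, size=T)                      # noisy position observations

def solve(log_sigma_obs):
    """Black-box MAP solve: weighted nonlinear least squares over the trajectory."""
    w_obs = np.exp(-log_sigma_obs)  # observation weight = 1 / sigma
    def residuals(x):
        r_odom = (np.diff(x, prepend=0.0) - odom) / 0.05   # odometry factors
        r_obs = (x - obs) * w_obs                          # observation factors
        return np.concatenate([r_odom, r_obs])
    return least_squares(residuals, x0=np.cumsum(odom)).x

def tracking_loss(log_sigma_obs):
    # Loss on tracking accuracy against the ground-truth trajectory.
    return np.mean((solve(log_sigma_obs) - gt) ** 2)

# Outer gradient loop: zeroth-order gradient through the non-differentiable solve.
theta, lr, eps = np.log(1.0), 0.5, 1e-3
for it in range(50):
    g = (tracking_loss(theta + eps) - tracking_loss(theta - eps)) / (2 * eps)
    theta -= lr * g
print("learned obs sigma:", np.exp(theta), "final loss:", tracking_loss(theta))

Because the inner solve is treated as a black box, the outer loop needs two solves per gradient step; the appeal of a genuinely gradient-based method like the one the abstract proposes is avoiding exactly this cost.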

Cite

Text

Qadri and Kaess. "Learning Observation Models with Incremental Non-Differentiable Graph Optimizers in the Loop for Robotics State Estimation." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.

Markdown

[Qadri and Kaess. "Learning Observation Models with Incremental Non-Differentiable Graph Optimizers in the Loop for Robotics State Estimation." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.](https://mlanthology.org/icmlw/2023/qadri2023icmlw-learning/)

BibTeX

@inproceedings{qadri2023icmlw-learning,
  title     = {{Learning Observation Models with Incremental Non-Differentiable Graph Optimizers in the Loop for Robotics State Estimation}},
  author    = {Qadri, Mohamad and Kaess, Michael},
  booktitle = {ICML 2023 Workshops: Differentiable_Almost_Everything},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/qadri2023icmlw-learning/}
}