Self-Improving Semantic Perception for Indoor Localisation

Abstract

We propose a novel robotic system that can improve its perception during deployment. Contrary to the established approach of learning semantics from large datasets and deploying fixed models, we introduce a framework in which semantic models are continuously updated on the robot to adapt to the deployment environments. By combining continual learning with self-supervision, our robotic system learns online during deployment without external supervision. We conduct real-world experiments with robots localising in 3D floorplans. Our experiments show how the robot’s semantic perception improves during deployment and how this translates into improved localisation, even across drastically different environments. We further study the risk of catastrophic forgetting that such a continual-learning setting poses. We find memory replay to be an effective measure for reducing forgetting and show how the robotic system can improve even when switching between different environments. On average, our system improves by 60% in segmentation and 10% in localisation accuracy compared to deploying a fixed model, and it maintains this improvement while adapting to further environments.
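The sketch below illustrates, under stated assumptions, how online fine-tuning with memory replay could be organised: a buffer retains samples from previously visited environments, and each online update mixes new self-supervised pseudo-labels with replayed samples. It assumes a generic PyTorch segmentation model; the names (ReplayBuffer, replay_step) and the reservoir-sampling buffer are illustrative choices, not the authors' implementation.

# Minimal sketch of online adaptation with memory replay (assumed names,
# not the paper's code). A generic PyTorch segmentation model is assumed.
import random
import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Fixed-size reservoir of (image, pseudo-label) pairs from past scenes."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, image, label):
        # Reservoir sampling keeps an approximately uniform sample over all data seen so far.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((image, label))
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (image, label)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))


def replay_step(model, optimizer, new_batch, buffer, replay_size=4):
    """One online update mixing new self-labelled samples with replayed ones."""
    images, labels = new_batch            # images: (B, C, H, W), labels: (B, H, W)
    replayed = buffer.sample(replay_size)
    if replayed:
        r_imgs, r_lbls = zip(*replayed)
        images = torch.cat([images, torch.stack(r_imgs)])
        labels = torch.cat([labels, torch.stack(r_lbls)])

    optimizer.zero_grad()
    logits = model(images)                # (B, num_classes, H, W) class scores
    loss = F.cross_entropy(logits, labels, ignore_index=255)
    loss.backward()
    optimizer.step()

    # Store the new samples so later environments can replay them.
    for img, lbl in zip(*new_batch):
        buffer.add(img, lbl)
    return loss.item()

In a deployment loop, each new self-labelled batch from the current environment would be passed to replay_step, so the model keeps adapting online while the buffer preserves samples from earlier environments to counteract forgetting.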

Cite

Text

Blum et al. "Self-Improving Semantic Perception for Indoor Localisation." Conference on Robot Learning, 2021.

Markdown

[Blum et al. "Self-Improving Semantic Perception for Indoor Localisation." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/blum2021corl-selfimproving/)

BibTeX

@inproceedings{blum2021corl-selfimproving,
  title     = {{Self-Improving Semantic Perception for Indoor Localisation}},
  author    = {Blum, Hermann and Milano, Francesco and Zurbrügg, René and Siegwart, Roland and Cadena, Cesar and Gawel, Abel},
  booktitle = {Conference on Robot Learning},
  year      = {2021},
  pages     = {1211--1222},
  volume    = {164},
  url       = {https://mlanthology.org/corl/2021/blum2021corl-selfimproving/}
}