SEAL: Self-Supervised Embodied Active Learning Using Exploration and 3D Consistency
Abstract
In this paper, we explore how we can build upon the data and models of Internet images and use them to adapt to robot vision without requiring any extra labels. We present a framework called Self-supervised Embodied Active Learning (SEAL). It utilizes perception models trained on Internet images to learn an active exploration policy. The observations gathered by this exploration policy are labelled using 3D consistency and used to improve the perception model. We build and utilize 3D semantic maps to learn both action and perception in a completely self-supervised manner. The semantic map is used to compute an intrinsic motivation reward for training the exploration policy and to label the agent's observations using spatio-temporal 3D consistency and label propagation. We demonstrate that the SEAL framework can be used to close the action-perception loop: it improves the object detection and instance segmentation performance of a pretrained perception model simply by moving around in training environments, and the improved perception model can in turn be used to improve Object Goal Navigation.
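To make the abstract's pipeline concrete, here is a minimal sketch of the two map-based components it describes: aggregating per-frame predictions into a 3D semantic voxel map, using the growth of confidently labelled voxels as an intrinsic exploration reward, and projecting the map back into frames to produce 3D-consistent pseudo-labels. All names, shapes, and thresholds (`VOXEL`, `N_CLASSES`, `conf_thresh`, the majority-vote aggregation) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

VOXEL = 0.05          # voxel size in metres (assumed)
N_CLASSES = 16        # number of object categories (assumed)

def backproject(depth, K, pose):
    """Lift a depth image (H, W) to world-frame 3D points (H*W, 3),
    given camera intrinsics K (3x3) and a world-from-camera pose (4x4)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)   # (4, H*W)
    return (pose @ pts_cam)[:3].T                            # (H*W, 3)

class SemanticVoxelMap:
    """Accumulate per-class evidence in voxels; a voxel's label is the
    argmax over accumulated class scores (an assumed aggregation rule)."""

    def __init__(self):
        self.scores = {}  # voxel index (i, j, k) -> per-class score vector

    def aggregate(self, depth, seg_probs, K, pose):
        """Splat per-pixel class probabilities (H, W, C) into the map."""
        pts = backproject(depth, K, pose)
        idx = np.floor(pts / VOXEL).astype(int)
        probs = seg_probs.reshape(-1, N_CLASSES)
        for key, p in zip(map(tuple, idx), probs):
            self.scores[key] = self.scores.get(key, 0.0) + p

    def coverage(self, conf_thresh=0.9):
        """Count confidently labelled voxels; the per-step increase of this
        quantity can serve as an intrinsic exploration reward (assumption)."""
        return sum(1 for s in self.scores.values()
                   if np.max(s) / (np.sum(s) + 1e-8) > conf_thresh)

    def pseudo_label(self, depth, K, pose):
        """Project voxel labels back into a frame -> (H, W) pseudo-labels
        (-1 marks pixels whose voxel has no evidence yet). These 3D-consistent
        labels would then supervise fine-tuning of the perception model."""
        pts = backproject(depth, K, pose)
        idx = map(tuple, np.floor(pts / VOXEL).astype(int))
        labels = np.array([np.argmax(self.scores[k]) if k in self.scores
                           else -1 for k in idx])
        return labels.reshape(depth.shape)
```

Because every frame observing a voxel contributes evidence before any label is read back out, a noisy single-frame detection is outvoted by consistent observations from other viewpoints, which is the intuition behind labelling with spatio-temporal 3D consistency.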
Cite
Text
Chaplot et al. "SEAL: Self-Supervised Embodied Active Learning Using Exploration and 3D Consistency." Neural Information Processing Systems, 2021.

Markdown
[Chaplot et al. "SEAL: Self-Supervised Embodied Active Learning Using Exploration and 3D Consistency." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/chaplot2021neurips-seal/)

BibTeX
@inproceedings{chaplot2021neurips-seal,
  title = {{SEAL: Self-Supervised Embodied Active Learning Using Exploration and 3D Consistency}},
  author = {Chaplot, Devendra Singh and Dalal, Murtaza and Gupta, Saurabh and Malik, Jitendra and Salakhutdinov, Ruslan},
  booktitle = {Neural Information Processing Systems},
  year = {2021},
  url = {https://mlanthology.org/neurips/2021/chaplot2021neurips-seal/}
}