Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception
Abstract
Collaborative perception learns how to share information among multiple robots so that they perceive the environment better together than any robot could alone. Prior research on this topic has been task-specific, such as detection or segmentation, which means each task requires sharing different information and hinders the large-scale deployment of collaborative perception. We propose the first task-agnostic collaborative perception paradigm that learns a single collaboration module in a self-supervised manner for different downstream tasks. This is done via a novel task termed multi-robot scene completion, in which each robot learns to effectively share information for reconstructing the complete scene viewed by all robots. Moreover, we propose a spatiotemporal autoencoder (STAR) that amortizes the communication cost over time through spatial sub-sampling and temporal mixing. Extensive experiments validate our method's effectiveness on scene completion and collaborative perception in autonomous driving scenarios. Our code is available at https://coperception.github.io/star/.
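The abstract's core bandwidth idea — send only a spatial subset of each robot's features per timestep and reuse cached features from earlier steps for the rest — can be sketched as follows. This is a minimal illustration under assumed shapes (a per-robot bird's-eye-view feature map of size H x W x C) and a random keep-mask; the paper's actual sub-sampling and mixing strategy may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_subsample(feat, keep_ratio=0.25):
    """Keep a random fraction of spatial cells; zero out the rest.

    feat: (H, W, C) bird's-eye-view feature map from one robot.
    Returns the sparsified map and the boolean keep mask.
    """
    H, W, _ = feat.shape
    mask = rng.random((H, W)) < keep_ratio
    return feat * mask[..., None], mask

def temporal_mix(curr_sparse, prev_cached, mask):
    """Fill cells dropped at the current step with features cached
    from the previous step, amortizing bandwidth over time."""
    return np.where(mask[..., None], curr_sparse, prev_cached)

# Toy usage: two timesteps of a 4x4 BEV map with 8 channels.
prev_full = rng.standard_normal((4, 4, 8))
curr_full = rng.standard_normal((4, 4, 8))

curr_sparse, mask = spatial_subsample(curr_full, keep_ratio=0.5)
mixed = temporal_mix(curr_sparse, prev_full, mask)

# Kept cells carry current features; dropped cells reuse cached ones,
# so only keep_ratio of the map is transmitted at this step.
assert np.allclose(mixed[mask], curr_full[mask])
assert np.allclose(mixed[~mask], prev_full[~mask])
```

In the paper's setting, the mixed feature maps from all robots would then feed a shared decoder trained self-supervisedly to reconstruct the complete scene.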
Cite
Text
Li et al. "Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception." Conference on Robot Learning, 2022.
Markdown
[Li et al. "Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/li2022corl-multirobot/)
BibTeX
@inproceedings{li2022corl-multirobot,
  title = {{Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception}},
  author = {Li, Yiming and Zhang, Juexiao and Ma, Dekun and Wang, Yue and Feng, Chen},
  booktitle = {Conference on Robot Learning},
  year = {2022},
  pages = {2062--2072},
  volume = {205},
  url = {https://mlanthology.org/corl/2022/li2022corl-multirobot/}
}