Continual Neural Mapping: Learning an Implicit Scene Representation from Sequential Observations
Abstract
Recent advances have enabled a single neural network to serve as an implicit scene representation, establishing the mapping function between spatial coordinates and scene properties. In this paper, we take a further step toward continual learning of the implicit scene representation directly from sequential observations, namely Continual Neural Mapping. The proposed problem setting bridges the gap between batch-trained implicit neural representations and the streaming data commonly used in the robotics and vision communities. We introduce an experience replay approach to tackle an exemplary task of continual neural mapping: approximating a continuous signed distance function (SDF) from sequential depth images as a scene geometry representation. We show for the first time that a single network can represent scene geometry over time continually without catastrophic forgetting, while achieving promising trade-offs between accuracy and efficiency.
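As a rough illustration of the experience-replay idea described in the abstract, the sketch below trains a single coordinate MLP to regress signed distances from a stream of per-frame (point, SDF) samples while replaying a bounded buffer of samples from earlier frames. This is a minimal sketch under stated assumptions, not the authors' implementation: the network architecture, replay buffer policy, L1 loss, and the hypothetical `frame_stream` iterator are all illustrative choices.

```python
# Minimal sketch of continual SDF learning with experience replay.
# NOT the paper's implementation; architecture, loss, and buffer policy are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SDFNet(nn.Module):
    """Coordinate MLP: 3D point -> signed distance value."""

    def __init__(self, hidden=256, layers=4):
        super().__init__()
        mods, dim = [], 3
        for _ in range(layers):
            mods += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        mods.append(nn.Linear(dim, 1))
        self.net = nn.Sequential(*mods)

    def forward(self, x):
        return self.net(x)


def continual_mapping(frame_stream, replay_capacity=100_000, steps_per_frame=200,
                      batch_new=2048, batch_replay=2048, lr=1e-4, device="cpu"):
    """Train one SDF network over a stream of per-frame (points, sdf) samples,
    mixing each batch with replayed past samples to mitigate forgetting.

    `frame_stream` is a hypothetical iterator yielding (pts, sdf) tensors of
    shapes (N, 3) and (N, 1), e.g. derived from back-projected depth images.
    """
    net = SDFNet().to(device)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    buffer_pts = torch.empty(0, 3, device=device)
    buffer_sdf = torch.empty(0, 1, device=device)

    for pts, sdf in frame_stream:
        pts, sdf = pts.to(device), sdf.to(device)
        for _ in range(steps_per_frame):
            # Sample a batch from the current frame.
            idx_new = torch.randint(len(pts), (batch_new,), device=device)
            x, y = pts[idx_new], sdf[idx_new]
            # Mix in replayed samples from earlier frames, if any are stored.
            if len(buffer_pts) > 0:
                idx_old = torch.randint(len(buffer_pts), (batch_replay,), device=device)
                x = torch.cat([x, buffer_pts[idx_old]])
                y = torch.cat([y, buffer_sdf[idx_old]])
            loss = F.l1_loss(net(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Keep the most recent samples up to the buffer capacity (simple FIFO;
        # the paper's replay strategy differs in detail).
        buffer_pts = torch.cat([buffer_pts, pts])[-replay_capacity:]
        buffer_sdf = torch.cat([buffer_sdf, sdf])[-replay_capacity:]
    return net
```

The key design point is that the network weights are the map: each incoming frame updates the same network, and replaying a bounded set of past samples is what keeps earlier geometry from being overwritten.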
Cite
Text
Yan et al. "Continual Neural Mapping: Learning an Implicit Scene Representation from Sequential Observations." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01549Markdown
[Yan et al. "Continual Neural Mapping: Learning an Implicit Scene Representation from Sequential Observations." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/yan2021iccv-continual/) doi:10.1109/ICCV48922.2021.01549BibTeX
@inproceedings{yan2021iccv-continual,
title = {{Continual Neural Mapping: Learning an Implicit Scene Representation from Sequential Observations}},
author = {Yan, Zike and Tian, Yuxin and Shi, Xuesong and Guo, Ping and Wang, Peng and Zha, Hongbin},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {15782--15792},
doi = {10.1109/ICCV48922.2021.01549},
url = {https://mlanthology.org/iccv/2021/yan2021iccv-continual/}
}