Learning Object Depth from Camera Motion and Video Object Segmentation
Abstract
Video object segmentation, i.e., the separation of a target object from background in video, has made significant progress on real and challenging videos in recent years. To leverage this progress in 3D applications, this paper addresses the problem of learning to estimate the depth of segmented objects given some measurement of camera motion (e.g., from robot kinematics or vehicle odometry). We achieve this by, first, introducing a diverse, extensible dataset and, second, designing a novel deep network that estimates the depth of objects using only segmentation masks and uncalibrated camera movement. Our data-generation framework creates artificial object segmentations that are scaled for changes in distance between the camera and object, and our network learns to estimate object depth even with segmentation errors. We demonstrate our approach across domains using a robot camera to locate objects from the YCB dataset and a vehicle camera to locate obstacles while driving.
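As background, the core geometric cue the abstract describes, that a segmented object's apparent size changes as the camera moves toward or away from it, can be sketched with the classical pinhole relation. This is only an idealized closed form (the function name and variables are illustrative, not from the paper); the paper's learned network exists precisely because real segmentation masks are noisy and this closed form is brittle.

```python
def depth_from_scale_change(w1, w2, delta_z):
    """Estimate initial object depth from apparent-size change.

    Pinhole model: apparent width w = f * W / Z for focal length f,
    physical width W, and depth Z. If the camera moves delta_z toward
    the object, then w1 * Z1 = w2 * Z2 with Z2 = Z1 - delta_z, so
    Z1 = delta_z * w2 / (w2 - w1). Note no calibration (f, W) is needed,
    only the ratio of mask sizes and the camera displacement.

    w1: mask width (pixels) before the camera moves
    w2: mask width (pixels) after moving delta_z toward the object
    """
    return delta_z * w2 / (w2 - w1)

# Example: the mask doubles in width after the camera advances 1 m,
# so the object was initially 2 m away (and is now 1 m away).
z1 = depth_from_scale_change(w1=100, w2=200, delta_z=1.0)
```

A single pair of noisy masks makes this estimate unstable when `w2 - w1` is small, which is why the paper's network aggregates masks over many frames of camera motion instead of inverting the geometry directly.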
Cite
Text
Griffin and Corso. "Learning Object Depth from Camera Motion and Video Object Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58571-6_18
Markdown
[Griffin and Corso. "Learning Object Depth from Camera Motion and Video Object Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/griffin2020eccv-learning/) doi:10.1007/978-3-030-58571-6_18
BibTeX
@inproceedings{griffin2020eccv-learning,
title = {{Learning Object Depth from Camera Motion and Video Object Segmentation}},
author = {Griffin, Brent A. and Corso, Jason J.},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58571-6_18},
url = {https://mlanthology.org/eccv/2020/griffin2020eccv-learning/}
}