Joint Depth Prediction and Semantic Segmentation with Multi-View SAM
Abstract
Multi-task approaches to joint depth and segmentation prediction are well-studied for monocular images. Yet, predictions from a single view are inherently limited, while multiple views are available in many robotics applications. On the other end of the spectrum, video-based and full 3D methods require numerous frames to perform reconstruction and segmentation. With this work, we propose a Multi-View Stereo (MVS) technique for depth prediction that benefits from the rich semantic features of the Segment Anything Model (SAM). This enhanced depth prediction, in turn, serves as a prompt to our Transformer-based semantic segmentation decoder. We report the mutual benefit that both tasks enjoy in our quantitative and qualitative studies on the ScanNet dataset. Our approach consistently outperforms single-task MVS and segmentation models, along with multi-task monocular methods.
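To make the described pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the data flow the abstract outlines: a shared image encoder produces semantic features, a multi-view head regresses depth from the reference and source views, and the predicted depth is fed back as a prompt to a Transformer-based segmentation decoder. All class names (SAMLikeEncoder, MVSDepthHead, DepthPromptedSegDecoder) are illustrative assumptions, not the authors' code: the encoder is a stand-in for the actual SAM image encoder, the multi-view fusion is a simple feature average rather than a plane-sweep cost volume, and the depth prompt is modeled as an extra input channel rather than the paper's exact prompting mechanism.

```python
# Hypothetical sketch of the described pipeline, NOT the authors' implementation.
import torch
import torch.nn as nn

class SAMLikeEncoder(nn.Module):
    """Placeholder for a SAM-style image encoder (assumed interface)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=4), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )

    def forward(self, x):                      # (B, 3, H, W) -> (B, C, H/4, W/4)
        return self.net(x)

class MVSDepthHead(nn.Module):
    """Toy multi-view fusion: average reference and source features, then
    regress a depth map (a real MVS head would build a plane-sweep cost
    volume from camera poses)."""
    def __init__(self, dim=64):
        super().__init__()
        self.regress = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, 1, 1), nn.Softplus(),   # keep depth positive
        )

    def forward(self, ref_feat, src_feats):
        fused = torch.stack([ref_feat, *src_feats], dim=0).mean(dim=0)
        return self.regress(fused)                 # (B, 1, h, w)

class DepthPromptedSegDecoder(nn.Module):
    """Transformer decoder where predicted depth is concatenated to the
    semantic features as a prompt channel before attention."""
    def __init__(self, dim=64, num_classes=20, num_layers=2):
        super().__init__()
        self.proj = nn.Conv2d(dim + 1, dim, 1)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers)
        self.classify = nn.Linear(dim, num_classes)

    def forward(self, feat, depth):
        x = self.proj(torch.cat([feat, depth], dim=1))       # (B, C, h, w)
        B, C, h, w = x.shape
        tokens = self.transformer(x.flatten(2).transpose(1, 2))  # (B, h*w, C)
        logits = self.classify(tokens).transpose(1, 2)            # (B, K, h*w)
        return logits.view(B, -1, h, w)

# Usage: one reference view plus two source views.
enc, depth_head, seg_head = SAMLikeEncoder(), MVSDepthHead(), DepthPromptedSegDecoder()
ref = torch.randn(1, 3, 128, 128)
srcs = [torch.randn(1, 3, 128, 128) for _ in range(2)]
ref_feat = enc(ref)
depth = depth_head(ref_feat, [enc(s) for s in srcs])
seg_logits = seg_head(ref_feat, depth)
print(depth.shape, seg_logits.shape)   # (1, 1, 32, 32), (1, 20, 32, 32)
```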
Cite
Text
Shvets et al. "Joint Depth Prediction and Semantic Segmentation with Multi-View SAM." Winter Conference on Applications of Computer Vision, 2024.
Markdown
[Shvets et al. "Joint Depth Prediction and Semantic Segmentation with Multi-View SAM." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/shvets2024wacv-joint/)
BibTeX
@inproceedings{shvets2024wacv-joint,
title = {{Joint Depth Prediction and Semantic Segmentation with Multi-View SAM}},
author = {Shvets, Mykhailo and Zhao, Dongxu and Niethammer, Marc and Sengupta, Roni and Berg, Alexander C.},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2024},
pages = {1328--1338},
url = {https://mlanthology.org/wacv/2024/shvets2024wacv-joint/}
}