Edge-Aware Consistent Stereo Video Depth Estimation
Abstract
Video depth estimation is crucial in various applications, such as scene reconstruction and augmented reality. In contrast to the naive approach of estimating depth from each frame independently, a more sophisticated approach exploits temporal information, thereby eliminating flickering and geometric inconsistencies. We propose a consistent method for dense video depth estimation; however, unlike existing monocular methods, ours operates on stereo videos. This technique overcomes the limitations arising from monocular input. As a benefit of using stereo inputs, a left-right consistency loss is introduced to improve the performance. In addition, we employ SLAM-based camera pose estimation in the process. To address the problem of depth blurriness during test-time training (TTT), we present an edge-preserving loss function that improves the visibility of fine details while preserving geometric consistency. We show that our edge-aware stereo video model can accurately estimate dense depth maps.
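The left-right consistency loss mentioned above can be sketched as follows. This is a minimal NumPy illustration of a common formulation (each left-view disparity is compared against the right-view disparity sampled at the horizontally shifted location); the paper's exact loss and its differentiable implementation may differ.

```python
import numpy as np

def lr_consistency_loss(disp_left: np.ndarray, disp_right: np.ndarray) -> float:
    """Mean absolute left-right disparity mismatch.

    For each left-view pixel (y, x) with disparity d = disp_left[y, x],
    the corresponding right-view pixel is (y, x - d); a consistent pair
    satisfies disp_left[y, x] == disp_right[y, x - d].
    """
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Nearest-neighbor sampling of the right disparity map, clipped to bounds
    xs_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    disp_right_warped = disp_right[ys, xs_right]
    return float(np.abs(disp_left - disp_right_warped).mean())
```

For perfectly consistent maps (e.g. a constant disparity in both views) the loss is zero; any disagreement between the warped right disparity and the left disparity increases it.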
Cite
Text
Kosheleva et al. "Edge-Aware Consistent Stereo Video Depth Estimation." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91838-4_24
Markdown
[Kosheleva et al. "Edge-Aware Consistent Stereo Video Depth Estimation." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/kosheleva2024eccvw-edgeaware/) doi:10.1007/978-3-031-91838-4_24
BibTeX
@inproceedings{kosheleva2024eccvw-edgeaware,
title = {{Edge-Aware Consistent Stereo Video Depth Estimation}},
author = {Kosheleva, Elena and Jaiswal, Sunil Prasad and Shamsafar, Faranak and Cheema, Noshaba and Illgner-Fehns, Klaus and Slusallek, Philipp},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
pages = {398--414},
doi = {10.1007/978-3-031-91838-4_24},
url = {https://mlanthology.org/eccvw/2024/kosheleva2024eccvw-edgeaware/}
}