Gaussian Mixture Models for Temporal Depth Fusion
Abstract
Sensing the 3D environment of a moving robot is essential for collision avoidance. Most 3D sensors produce dense depth maps, which are subject to imperfections caused by various environmental factors; temporal fusion of depth maps is crucial to overcome these imperfections. Temporal fusion is traditionally done in 3D space with voxel data structures, but it can also be performed in image space, with potential savings in memory and computational cost for applications like reactive collision avoidance for micro air vehicles. In this paper, we present an efficient Gaussian Mixture Model (GMM) based depth map fusion approach, introducing an online update scheme for dense representations. The environment is modeled from an ego-centric point of view, where each pixel is represented by a mixture of Gaussian inverse-depth models. Consecutive frames are related to each other by transformations obtained from visual odometry. This approach achieves better accuracy than alternative image-space depth map fusion techniques at lower computational cost.
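The per-pixel online update described above can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: it assumes a Stauffer-Grimson-style weight update adapted to inverse depth, and all class/parameter names (`InverseDepthGMM`, `alpha`, `match_sigma`, etc.) are hypothetical.

```python
import math

class InverseDepthGMM:
    """Per-pixel mixture of Gaussians over inverse depth (1/z).

    Hypothetical sketch: each pixel keeps a small set of Gaussian modes
    that are matched against and updated as new inverse-depth measurements
    arrive from consecutive, motion-compensated frames.
    """

    def __init__(self, max_modes=3, alpha=0.2, match_sigma=2.5, init_var=0.05):
        self.max_modes = max_modes      # modes kept per pixel
        self.alpha = alpha              # learning rate for mode weights
        self.match_sigma = match_sigma  # match gate, in standard deviations
        self.init_var = init_var        # variance assigned to new modes
        self.modes = []                 # each mode is [weight, mean, var]

    def update(self, inv_depth):
        # 1. Find the first mode whose gate contains the measurement.
        matched = None
        for m in self.modes:
            if abs(inv_depth - m[1]) <= self.match_sigma * math.sqrt(m[2]):
                matched = m
                break
        # 2. Decay all weights, then boost the matched mode.
        for m in self.modes:
            m[0] *= (1.0 - self.alpha)
        if matched is not None:
            matched[0] += self.alpha
            rho = self.alpha / matched[0]  # per-mode learning rate
            matched[1] += rho * (inv_depth - matched[1])
            matched[2] += rho * ((inv_depth - matched[1]) ** 2 - matched[2])
        else:
            # 3. No match: replace the weakest mode with a fresh one.
            if len(self.modes) >= self.max_modes:
                self.modes.remove(min(self.modes, key=lambda m: m[0]))
            self.modes.append([self.alpha, inv_depth, self.init_var])
        # 4. Renormalise weights so they sum to one.
        total = sum(m[0] for m in self.modes)
        for m in self.modes:
            m[0] /= total

    def best_inverse_depth(self):
        """Inverse depth of the most confident mode."""
        return max(self.modes, key=lambda m: m[0])[1]
```

Feeding noisy measurements around a true inverse depth lets the dominant mode converge while an outlier only spawns a low-weight mode, which is the robustness the mixture representation buys over a single running average.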
Cite
Text
Çigla et al. "Gaussian Mixture Models for Temporal Depth Fusion." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017. doi:10.1109/WACV.2017.104
Markdown
[Çigla et al. "Gaussian Mixture Models for Temporal Depth Fusion." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017.](https://mlanthology.org/wacv/2017/cigla2017wacv-gaussian/) doi:10.1109/WACV.2017.104
BibTeX
@inproceedings{cigla2017wacv-gaussian,
title = {{Gaussian Mixture Models for Temporal Depth Fusion}},
author = {Çigla, Cevahir and Brockers, Roland and Matthies, Larry H.},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2017},
pages = {889--897},
doi = {10.1109/WACV.2017.104},
url = {https://mlanthology.org/wacv/2017/cigla2017wacv-gaussian/}
}