A Video Representation Using Temporal Superpixels
Abstract
We develop a generative probabilistic model for temporally consistent superpixels in video sequences. In contrast to supervoxel methods, object parts in different frames are tracked by the same temporal superpixel. We explicitly model flow between frames with a bilateral Gaussian process and use this information to propagate superpixels in an online fashion. We consider four novel metrics to quantify performance of a temporal superpixel representation and demonstrate superior performance when compared to supervoxel methods.
Cite
Text
Chang et al. "A Video Representation Using Temporal Superpixels." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.267
Markdown
[Chang et al. "A Video Representation Using Temporal Superpixels." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/chang2013cvpr-video/) doi:10.1109/CVPR.2013.267
BibTeX
@inproceedings{chang2013cvpr-video,
title = {{A Video Representation Using Temporal Superpixels}},
author = {Chang, Jason and Wei, Donglai and Fisher, John W., III},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2013},
doi = {10.1109/CVPR.2013.267},
url = {https://mlanthology.org/cvpr/2013/chang2013cvpr-video/}
}