LabelMe Video: Building a Video Database with Human Annotations
Abstract
Currently, video analysis algorithms suffer from a lack of information regarding the objects present and their interactions, as well as from the absence of comprehensive annotated video databases for benchmarking. We designed an online and openly accessible video annotation system that allows anyone with a browser and internet access to efficiently annotate object category, shape, motion, and activity information in real-world videos. The annotations are also complemented with knowledge from static image databases to infer occlusion and depth information. Using this system, we have built a scalable video database composed of diverse video samples paired with human-guided annotations. We complement this paper by demonstrating potential uses of this database, studying motion statistics as well as cause-effect motion relationships between objects.
Cite
Text
Yuen et al. "LabelMe Video: Building a Video Database with Human Annotations." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459289

Markdown

[Yuen et al. "LabelMe Video: Building a Video Database with Human Annotations." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/yuen2009iccv-labelme/) doi:10.1109/ICCV.2009.5459289

BibTeX
@inproceedings{yuen2009iccv-labelme,
title = {{LabelMe Video: Building a Video Database with Human Annotations}},
author = {Yuen, Jenny and Russell, Bryan C. and Liu, Ce and Torralba, Antonio},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2009},
pages = {1451-1458},
doi = {10.1109/ICCV.2009.5459289},
url = {https://mlanthology.org/iccv/2009/yuen2009iccv-labelme/}
}