A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation
Abstract
Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. At the same time, legacy datasets may impede the evolution of a field due to saturated algorithm performance and the lack of contemporary, high-quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high-quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motion blur and appearance changes. Each video is accompanied by densely annotated, pixel-accurate and per-frame ground truth segmentation. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover strengths and weaknesses of current approaches, opening up promising directions for future work.
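The three metrics are, in the paper's notation, region similarity J (the Jaccard index between predicted and ground-truth masks), contour accuracy F (an F-measure over boundary precision and recall), and temporal stability T. As a rough illustration of the first two, the Python sketch below computes a per-frame J and a simplified F; the function names are illustrative, and the fixed-radius dilation used as a boundary tolerance is only an approximation of the bipartite contour matching in the official evaluation (temporal stability T, which requires matching across frames, is omitted).

import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def region_similarity(mask: np.ndarray, gt: np.ndarray) -> float:
    """Region similarity J: intersection-over-union (Jaccard index)
    between a predicted binary mask and the ground-truth mask."""
    mask, gt = mask.astype(bool), gt.astype(bool)
    union = np.logical_or(mask, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(mask, gt).sum() / union)

def contour_accuracy(mask: np.ndarray, gt: np.ndarray, tol: int = 2) -> float:
    """Contour accuracy F: F-measure of boundary precision and recall.
    The paper matches contour points via a bipartite assignment; here a
    dilation by `tol` pixels stands in as a fast approximation."""
    mask, gt = mask.astype(bool), gt.astype(bool)
    # One-pixel-wide boundaries: a mask XOR its erosion.
    mask_bnd = mask ^ binary_erosion(mask)
    gt_bnd = gt ^ binary_erosion(gt)
    struct = np.ones((2 * tol + 1, 2 * tol + 1), dtype=bool)
    # A predicted boundary pixel counts as correct if it lies within
    # `tol` pixels of some ground-truth boundary pixel, and vice versa.
    precision = (mask_bnd & binary_dilation(gt_bnd, struct)).sum() / max(mask_bnd.sum(), 1)
    recall = (gt_bnd & binary_dilation(mask_bnd, struct)).sum() / max(gt_bnd.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return float(2 * precision * recall / (precision + recall))

if __name__ == "__main__":
    # Toy example: a predicted square shifted by two pixels from the truth.
    gt = np.zeros((64, 64), dtype=bool)
    gt[16:48, 16:48] = True
    pred = np.zeros_like(gt)
    pred[18:50, 18:50] = True
    print(f"J = {region_similarity(pred, gt):.3f}")
    print(f"F = {contour_accuracy(pred, gt):.3f}")

In the benchmark itself, each metric is averaged over all frames of a sequence and reported per sequence and overall; the sketch above only shows the per-frame computation.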
Cite
Text
Perazzi et al. "A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.85
Markdown
[Perazzi et al. "A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/perazzi2016cvpr-benchmark/) doi:10.1109/CVPR.2016.85
BibTeX
@inproceedings{perazzi2016cvpr-benchmark,
title = {{A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation}},
author = {Perazzi, Federico and Pont-Tuset, Jordi and McWilliams, Brian and Van Gool, Luc and Gross, Markus and Sorkine-Hornung, Alexander},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.85},
url = {https://mlanthology.org/cvpr/2016/perazzi2016cvpr-benchmark/}
}