EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations
Abstract
We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which brings a new set of challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced and cooked, where we aim to obtain accurate pixel-level annotations of the peel, onion pieces, chopping board, knife, pan, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks, and 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Along with the annotations, we introduce three challenges in video object segmentation, interaction understanding and long-term reasoning. For data, code and leaderboards: http://epic-kitchens.github.io/VISOR
Cite
Text
Darkhalil et al. "EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations." Neural Information Processing Systems, 2022.
Markdown
[Darkhalil et al. "EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/darkhalil2022neurips-epickitchens/)
BibTeX
@inproceedings{darkhalil2022neurips-epickitchens,
title = {{EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations}},
author = {Darkhalil, Ahmad and Shan, Dandan and Zhu, Bin and Ma, Jian and Kar, Amlan and Higgins, Richard and Fidler, Sanja and Fouhey, David and Damen, Dima},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/darkhalil2022neurips-epickitchens/}
}