COM Kitchens: An Unedited Overhead-View Procedural Videos Dataset as a Vision-Language Benchmark

Abstract

Procedural video understanding is gaining attention in the vision and language community. Deep learning-based video analysis requires extensive data. Consequently, existing works often use web videos as training resources, making it challenging to query instructional content from raw video observations. To address this issue, we propose a new dataset, COM Kitchens. The dataset consists of unedited overhead-view videos captured by smartphones, in which participants performed food preparation based on given recipes. Fixed-viewpoint video datasets often lack environmental diversity due to the high cost of camera setup. We used modern wide-angle smartphone lenses to cover cooking counters from sink to cooktop in an overhead view, capturing activity without in-person assistance. With this setup, we collected a diverse dataset by distributing smartphones to participants. On this dataset, we propose the novel video-to-text retrieval task Online Recipe Retrieval (OnRR) and the new video captioning domain Dense Video Captioning on unedited Overhead-View videos (DVC-OV). Our experiments verified the capabilities and limitations of current web-video-based SOTA methods on these tasks. The dataset and code are available at https://doi.org/10.32130/rdata.6.1 and https://github.com/omron-sinicx/com_kitchens, respectively.
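The proposed OnRR task is, at its core, a video-to-text retrieval problem, and retrieval benchmarks of this kind are commonly scored with Recall@K over a query-by-candidate similarity matrix. Below is a minimal, generic sketch of that metric in Python; it is not the paper's exact evaluation protocol, and the recall_at_k helper and toy similarity matrix are illustrative assumptions.

import numpy as np

def recall_at_k(similarity: np.ndarray, k: int) -> float:
    # similarity[i, j]: score of candidate recipe j for video query i;
    # the ground-truth recipe is assumed to sit on the diagonal.
    # A query counts as a hit if fewer than k candidates outscore its truth.
    ranks = (similarity > np.diag(similarity)[:, None]).sum(axis=1)
    return float((ranks < k).mean())

# Toy example: 3 video queries scored against 3 candidate recipes.
sim = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.5, 0.9, 0.7],
])
print(recall_at_k(sim, k=1))  # 2 of 3 queries rank their own recipe first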

Cite

Text

Hashimoto et al. "COM Kitchens: An Unedited Overhead-View Procedural Videos Dataset as a Vision-Language Benchmark." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73650-6_8

Markdown

[Hashimoto et al. "COM Kitchens: An Unedited Overhead-View Procedural Videos Dataset as a Vision-Language Benchmark." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/hashimoto2024eccv-com/) doi:10.1007/978-3-031-73650-6_8

BibTeX

@inproceedings{hashimoto2024eccv-com,
  title     = {{COM Kitchens: An Unedited Overhead-View Procedural Videos Dataset as a Vision-Language Benchmark}},
  author    = {Hashimoto, Atsushi and Maeda, Koki and Hirasawa, Tosho and Harashima, Jun and Rybicki, Leszek and Fukasawa, Yusuke and Ushiku, Yoshitaka},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73650-6_8},
  url       = {https://mlanthology.org/eccv/2024/hashimoto2024eccv-com/}
}