Towards Long-Form Video Understanding

Abstract

Our world offers a never-ending stream of visual stimuli, yet today's vision systems only accurately recognize patterns within a few seconds. These systems understand the present, but fail to contextualize it in past or future events. In this paper, we study long-form video understanding. We introduce a framework for modeling long-form videos and develop evaluation protocols on large-scale datasets. We show that existing state-of-the-art short-term models are limited for long-form tasks. A novel object-centric transformer-based video recognition architecture performs significantly better on 7 diverse tasks. It also outperforms comparable state-of-the-art on the AVA dataset.
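The abstract names an object-centric transformer but does not describe its internals here. As a loose illustrative sketch only (not the authors' implementation), assuming each detected object in a long video is pooled to a single feature token carrying a timestamp embedding, a standard transformer encoder can then attend across all object tokens to produce a video-level prediction; every name and shape below is hypothetical:

import torch
import torch.nn as nn

class ObjectTokenTransformer(nn.Module):
    """Hypothetical sketch: attend over per-object tokens from a long video."""
    def __init__(self, feat_dim=256, num_layers=4, num_heads=8, num_classes=10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.time_embed = nn.Linear(1, feat_dim)    # crude timestamp embedding
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, obj_feats, obj_times):
        # obj_feats: (batch, num_objects, feat_dim) pooled per-object features
        # obj_times: (batch, num_objects, 1) normalized timestamps in [0, 1]
        tokens = obj_feats + self.time_embed(obj_times)
        encoded = self.encoder(tokens)               # attend across all objects
        return self.classifier(encoded.mean(dim=1))  # video-level prediction

model = ObjectTokenTransformer()
feats = torch.randn(2, 50, 256)    # 50 object tokens per video
times = torch.rand(2, 50, 1)
logits = model(feats, times)       # shape (2, 10)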

Cite

Text

Wu and Krahenbuhl. "Towards Long-Form Video Understanding." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00192

Markdown

[Wu and Krahenbuhl. "Towards Long-Form Video Understanding." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/wu2021cvpr-longform/) doi:10.1109/CVPR46437.2021.00192

BibTeX

@inproceedings{wu2021cvpr-longform,
  title     = {{Towards Long-Form Video Understanding}},
  author    = {Wu, Chao-Yuan and Krahenbuhl, Philipp},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {1884--1894},
  doi       = {10.1109/CVPR46437.2021.00192},
  url       = {https://mlanthology.org/cvpr/2021/wu2021cvpr-longform/}
}