Temporal Aggregate Representations for Long-Range Video Understanding

Abstract

Future prediction, especially in long-range videos, requires reasoning from current and past observations. In this work, we address questions of temporal extent, scaling, and level of semantic abstraction with a flexible multi-granular temporal aggregation framework. We show that state-of-the-art results in both next-action and dense anticipation can be achieved with simple techniques such as max-pooling and attention. We demonstrate the anticipation capabilities of our model on the Breakfast, 50Salads, and EPIC-Kitchens datasets. With minimal modifications, the model also extends to video segmentation and action recognition.
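
The core idea can be sketched briefly. The following is a minimal, illustrative PyTorch sketch, not the authors' released model: observed frame features are max-pooled at several temporal granularities, and a small attention layer fuses the per-scale summaries into one aggregate representation. The class name TemporalAggregate, the pooling scales, and the feature dimension are assumptions made here for illustration.

# Minimal sketch of multi-granular temporal aggregation with
# max-pooling and attention (illustrative, not the authors' code).
import torch
import torch.nn as nn

class TemporalAggregate(nn.Module):
    def __init__(self, feat_dim=400, scales=(10, 20, 30)):
        super().__init__()
        self.scales = scales              # snippets per granularity (assumed values)
        self.attn = nn.Linear(feat_dim, 1)  # scores one pooled vector per scale

    def forward(self, frames):
        # frames: (T, D) features of the observed portion of the video
        pooled = []
        for s in self.scales:
            # split the observation into s snippets, max-pool within each
            # snippet, then max-pool the snippet summaries for this scale
            chunks = torch.chunk(frames, s, dim=0)
            snippets = torch.stack([c.max(dim=0).values for c in chunks])
            pooled.append(snippets.max(dim=0).values)
        pooled = torch.stack(pooled)                       # (num_scales, D)
        weights = torch.softmax(self.attn(pooled), dim=0)  # (num_scales, 1)
        return (weights * pooled).sum(dim=0)               # (D,) aggregate

# usage: summarize 900 frames of 400-d features into one 400-d vector
video = torch.randn(900, 400)
summary = TemporalAggregate()(video)
print(summary.shape)  # torch.Size([400])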

Cite

Text

Sener et al. "Temporal Aggregate Representations for Long-Range Video Understanding." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58517-4_10

Markdown

[Sener et al. "Temporal Aggregate Representations for Long-Range Video Understanding." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/sener2020eccv-temporal/) doi:10.1007/978-3-030-58517-4_10

BibTeX

@inproceedings{sener2020eccv-temporal,
  title     = {{Temporal Aggregate Representations for Long-Range Video Understanding}},
  author    = {Sener, Fadime and Singhania, Dipika and Yao, Angela},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58517-4_10},
  url       = {https://mlanthology.org/eccv/2020/sener2020eccv-temporal/}
}