TokenLearner: Adaptive Space-Time Tokenization for Videos

Abstract

In this paper, we introduce a novel visual representation learning approach which relies on a handful of adaptively learned tokens, and which is applicable to both image and video understanding tasks. Instead of relying on hand-designed splitting strategies to obtain visual tokens and processing a large number of densely sampled patches for attention, our approach learns to mine important tokens in visual data. This results in efficiently and effectively finding a few important visual tokens and enables modeling of pairwise attention between such tokens, over a longer temporal horizon for videos, or over the spatial content in image frames. Our experiments demonstrate strong performance on several challenging benchmarks for video recognition tasks. Importantly, because our tokens are adaptive, we achieve competitive results at significantly reduced computational cost. We establish new state-of-the-art results on multiple video datasets, including Kinetics-400, Kinetics-600, Charades, and AViD.
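The core idea described in the abstract, learning a small set of attention maps that pool a dense feature map into a handful of adaptive tokens, can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a simple two-layer MLP produces the per-token spatial weights (the paper explores convolutional and MLP variants), and the function and parameter names (`token_learner`, `w1`, `w2`, `S`) are hypothetical.

```python
import jax
import jax.numpy as jnp

def token_learner(x, w1, b1, w2, b2):
    """Sketch of adaptive tokenization: a small MLP predicts S spatial
    attention maps, and each map pools the feature map into one token.

    x:  (H*W, C) flattened feature map for one frame
    w1: (C, hidden), b1: (hidden,)
    w2: (hidden, S), b2: (S,)   where S = number of learned tokens
    Returns: (S, C) matrix of learned tokens.
    """
    h = jax.nn.gelu(x @ w1 + b1)           # (H*W, hidden)
    logits = h @ w2 + b2                   # (H*W, S) one logit per location per token
    attn = jax.nn.softmax(logits, axis=0)  # normalize over spatial locations
    tokens = attn.T @ x                    # (S, C) attention-weighted pooling
    return tokens

# Toy usage: a 14x14 feature map with 64 channels reduced to 8 tokens.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
H, W, C, hidden, S = 14, 14, 64, 32, 8
x = jax.random.normal(k1, (H * W, C))
w1 = jax.random.normal(k2, (C, hidden)) * 0.02
w2 = jax.random.normal(k3, (hidden, S)) * 0.02
tokens = token_learner(x, jnp.zeros(hidden) * 0 + 0, jnp.zeros(hidden), w2, jnp.zeros(S)) if False else \
         token_learner(x, w1, jnp.zeros(hidden), w2, jnp.zeros(S))
print(tokens.shape)  # (8, 64)
```

Because only S tokens (e.g. 8 or 16) per frame are passed to subsequent attention layers instead of hundreds of patch tokens, the cost of downstream pairwise attention drops substantially, which is the source of the efficiency gains the abstract describes.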

Cite

Text

Ryoo et al. "TokenLearner: Adaptive Space-Time Tokenization for Videos." Neural Information Processing Systems, 2021.

Markdown

[Ryoo et al. "TokenLearner: Adaptive Space-Time Tokenization for Videos." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/ryoo2021neurips-tokenlearner/)

BibTeX

@inproceedings{ryoo2021neurips-tokenlearner,
  title     = {{TokenLearner: Adaptive Space-Time Tokenization for Videos}},
  author    = {Ryoo, Michael S. and Piergiovanni, AJ and Arnab, Anurag and Dehghani, Mostafa and Angelova, Anelia},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/ryoo2021neurips-tokenlearner/}
}