Future Transformer for Long-Term Action Anticipation

Abstract

The task of predicting future actions from a video is crucial for a real-world agent interacting with others. When anticipating actions in the distant future, we humans typically consider long-term relations over the whole sequence of actions, i.e., not only observed actions in the past but also potential actions in the future. In a similar spirit, we propose an end-to-end attention model for action anticipation, dubbed Future Transformer (FUTR), that leverages global attention over all input frames and output tokens to predict a minutes-long sequence of future actions. Unlike previous autoregressive models, the proposed method learns to predict the whole sequence of future actions via parallel decoding, enabling more accurate and faster inference for long-term anticipation. We evaluate our method on two standard benchmarks for long-term action anticipation, Breakfast and 50 Salads, achieving state-of-the-art results.
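The sketch below illustrates the parallel-decoding idea described in the abstract: a transformer decoder with a fixed set of learned query tokens attends globally to all observed frames and emits the entire future action sequence in one forward pass, rather than generating it step by step. This is not the authors' implementation; all module names, dimensions, and the number of future queries are illustrative assumptions.

```python
# Minimal sketch of parallel decoding for long-term action anticipation.
# Hypothetical hyperparameters throughout; not the FUTR reference code.
import torch
import torch.nn as nn

class ParallelAnticipationDecoder(nn.Module):
    def __init__(self, feat_dim=2048, d_model=256, n_heads=8,
                 n_layers=2, n_queries=8, n_classes=48):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)  # project frame features
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        # Learned query tokens, one per future action segment; all of them
        # are decoded in parallel instead of autoregressively.
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.action_head = nn.Linear(d_model, n_classes)  # action class per segment
        self.duration_head = nn.Linear(d_model, 1)        # relative segment duration

    def forward(self, frame_feats):
        # frame_feats: (batch, T_obs, feat_dim) features of observed frames.
        memory = self.encoder(self.proj(frame_feats))
        q = self.queries.unsqueeze(0).expand(frame_feats.size(0), -1, -1)
        # Each query attends to every observed frame (cross-attention) and to
        # every other query (self-attention), so the whole future sequence is
        # predicted in a single pass.
        out = self.decoder(q, memory)
        return self.action_head(out), self.duration_head(out).squeeze(-1)

model = ParallelAnticipationDecoder()
feats = torch.randn(2, 100, 2048)      # 2 videos, 100 observed frames each
actions, durations = model(feats)
print(actions.shape, durations.shape)  # torch.Size([2, 8, 48]) torch.Size([2, 8])
```

Because every future token is produced in one decoder pass, inference cost does not grow with the length of the anticipated sequence, which is the efficiency advantage the abstract contrasts against autoregressive models.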

Cite

Text

Gong et al. "Future Transformer for Long-Term Action Anticipation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00306

Markdown

[Gong et al. "Future Transformer for Long-Term Action Anticipation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/gong2022cvpr-future/) doi:10.1109/CVPR52688.2022.00306

BibTeX

@inproceedings{gong2022cvpr-future,
  title     = {{Future Transformer for Long-Term Action Anticipation}},
  author    = {Gong, Dayoung and Lee, Joonseok and Kim, Manjin and Ha, Seong Jong and Cho, Minsu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {3052--3061},
  doi       = {10.1109/CVPR52688.2022.00306},
  url       = {https://mlanthology.org/cvpr/2022/gong2022cvpr-future/}
}