End-to-End Spatio-Temporal Action Localisation with Video Transformers
Abstract
The most performant spatio-temporal action localisation models use external person proposals and complex external memory banks. We propose a fully end-to-end, transformer-based model that directly ingests an input video and outputs tubelets -- a sequence of bounding boxes and the action classes at each frame. Our flexible model can be trained with either sparse bounding-box supervision on individual frames or full tubelet annotations, and in both cases it predicts coherent tubelets as output. Moreover, our end-to-end model requires no additional pre-processing in the form of proposals, nor post-processing in the form of non-maximal suppression. We perform extensive ablation experiments and significantly advance the state-of-the-art on five different spatio-temporal action localisation benchmarks with both sparse keyframe and full tubelet annotations.
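As a rough illustration of the tubelet output format described in the abstract (not the authors' implementation; the class, field names, and shapes below are assumptions for illustration only), an end-to-end interface of this kind might look like:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Tubelet:
    """One predicted action tubelet: a per-frame box track plus per-frame class scores."""
    boxes: np.ndarray         # shape (num_frames, 4): [x_min, y_min, x_max, y_max] per frame
    class_scores: np.ndarray  # shape (num_frames, num_classes): action scores per frame

def localise_actions(video: np.ndarray, model) -> list[Tubelet]:
    """Hypothetical end-to-end inference: the model maps raw frames directly to tubelets,
    with no external person proposals and no non-maximal suppression."""
    # video: shape (num_frames, height, width, 3), RGB input clip
    predictions = model(video)  # assumed to return a list of {"boxes", "scores"} dicts
    return [Tubelet(boxes=p["boxes"], class_scores=p["scores"]) for p in predictions]
```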
Cite
Text
Gritsenko et al. "End-to-End Spatio-Temporal Action Localisation with Video Transformers." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01739

Markdown

[Gritsenko et al. "End-to-End Spatio-Temporal Action Localisation with Video Transformers." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/gritsenko2024cvpr-endtoend/) doi:10.1109/CVPR52733.2024.01739

BibTeX
@inproceedings{gritsenko2024cvpr-endtoend,
title = {{End-to-End Spatio-Temporal Action Localisation with Video Transformers}},
author = {Gritsenko, Alexey A. and Xiong, Xuehan and Djolonga, Josip and Dehghani, Mostafa and Sun, Chen and Lucic, Mario and Schmid, Cordelia and Arnab, Anurag},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
pages = {18373-18383},
doi = {10.1109/CVPR52733.2024.01739},
url = {https://mlanthology.org/cvpr/2024/gritsenko2024cvpr-endtoend/}
}