TubeDETR: Spatio-Temporal Video Grounding with Transformers
Abstract
We consider the problem of localizing a spatio-temporal tube in a video corresponding to a given text query. This is a challenging task that requires the joint and efficient modeling of temporal, spatial and multi-modal interactions. To address this task, we propose TubeDETR, a transformer-based architecture inspired by the recent success of such models for text-conditioned object detection. Our model notably includes: (i) an efficient video and text encoder that models spatial multi-modal interactions over sparsely sampled frames and (ii) a space-time decoder that jointly performs spatio-temporal localization. We demonstrate the advantage of our proposed components through an extensive ablation study. We also evaluate our full approach on the spatio-temporal video grounding task and demonstrate improvements over the state of the art on the challenging VidSTG and HC-STVG benchmarks.
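The abstract describes a two-part design: a joint video-text encoder over sparsely sampled frames and a space-time decoder that predicts the spatio-temporal tube. The following is a minimal, hypothetical PyTorch sketch of that overall structure, not the authors' implementation: all names, dimensions, and heads (SpaceTimeGroundingSketch, frame_proj, time_query, box_head, temporal_head) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpaceTimeGroundingSketch(nn.Module):
    """Schematic spatio-temporal grounding model (illustrative only):
    a joint video-text encoder followed by a space-time decoder that
    predicts one box per sampled frame plus temporal start/end scores."""

    def __init__(self, d_model=256, n_heads=8, n_enc=2, n_dec=2, vocab=1000):
        super().__init__()
        self.text_embed = nn.Embedding(vocab, d_model)          # toy text token embeddings
        self.frame_proj = nn.Linear(2048, d_model)              # project per-frame visual features
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_enc)  # joint video-text encoder
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_dec)  # space-time decoder
        self.time_query = nn.Parameter(torch.randn(1, 1, d_model))  # shared query, repeated per frame
        self.box_head = nn.Linear(d_model, 4)                   # (cx, cy, w, h) per frame
        self.temporal_head = nn.Linear(d_model, 2)               # start / end logits per frame

    def forward(self, frame_feats, text_ids):
        # frame_feats: (B, T, 2048) features of T sparsely sampled frames
        # text_ids:    (B, L) token ids of the text query
        B, T, _ = frame_feats.shape
        video = self.frame_proj(frame_feats)                     # (B, T, d)
        text = self.text_embed(text_ids)                         # (B, L, d)
        memory = self.encoder(torch.cat([video, text], dim=1))   # joint multi-modal memory
        queries = self.time_query.expand(B, T, -1)               # one query per sampled frame
        hs = self.decoder(queries, memory)                       # (B, T, d)
        boxes = self.box_head(hs).sigmoid()                      # normalized boxes per frame
        start_end = self.temporal_head(hs)                       # temporal localization logits
        return boxes, start_end

model = SpaceTimeGroundingSketch()
boxes, start_end = model(torch.randn(2, 8, 2048), torch.randint(0, 1000, (2, 12)))
print(boxes.shape, start_end.shape)  # torch.Size([2, 8, 4]) torch.Size([2, 8, 2])
```

The per-frame box predictions form the spatial part of the tube, while the start/end scores localize it in time; the details of feature extraction, attention structure, and losses are described in the paper itself.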
Cite
Text
Yang et al. "TubeDETR: Spatio-Temporal Video Grounding with Transformers." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01595
Markdown
[Yang et al. "TubeDETR: Spatio-Temporal Video Grounding with Transformers." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/yang2022cvpr-tubedetr/) doi:10.1109/CVPR52688.2022.01595
BibTeX
@inproceedings{yang2022cvpr-tubedetr,
title = {{TubeDETR: Spatio-Temporal Video Grounding with Transformers}},
author = {Yang, Antoine and Miech, Antoine and Sivic, Josef and Laptev, Ivan and Schmid, Cordelia},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {16442--16453},
doi = {10.1109/CVPR52688.2022.01595},
url = {https://mlanthology.org/cvpr/2022/yang2022cvpr-tubedetr/}
}