Spatio-Temporal Ranked-Attention Networks for Video Captioning
Abstract
Generating video descriptions automatically is a challenging task that involves a complex interplay between spatio-temporal visual features and language models. Given that videos consist of spatial (frame-level) features and their temporal evolutions, an effective captioning model should be able to attend to these different cues selectively. To this end, we propose a Spatio-Temporal and Temporo-Spatial (STaTS) attention model which, conditioned on the language state, hierarchically combines spatial and temporal attention to videos in two different orders: (i) a spatio-temporal (ST) sub-model, which first attends to regions that have temporal evolution, then temporally pools the features from these regions; and (ii) a temporo-spatial (TS) sub-model, which first decides a single frame to attend to, then applies spatial attention within that frame. We propose a novel LSTM-based temporal ranking function, which we call ranked attention, for the ST model to capture action dynamics. Our entire framework is trained end-to-end. We provide experiments on two benchmark datasets: MSVD and MSR-VTT. Our results demonstrate the synergy between the ST and TS modules, outperforming recent state-of-the-art methods.
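The two attention orders described in the abstract can be made concrete with a small sketch. Below is a minimal, hypothetical PyTorch rendering of the ST and TS sub-models, assuming frame-level region features of shape (batch, frames, regions, channels) and soft attention conditioned on the language LSTM state. The module name STaTSAttention, the scoring networks, the soft (rather than hard) frame selection in the TS path, and the additive fusion are all illustrative assumptions; the paper's LSTM-based ranked attention for temporal pooling is not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class STaTSAttention(nn.Module):
    """Hypothetical sketch of the two attention orders (ST and TS).

    feats: (B, T, R, D) region features for T frames, R regions, D channels.
    h:     (B, H) language LSTM hidden state that conditions both attentions.
    """

    def __init__(self, feat_dim, hid_dim, att_dim=256):
        super().__init__()
        self.spatial_score = nn.Sequential(
            nn.Linear(feat_dim + hid_dim, att_dim), nn.Tanh(),
            nn.Linear(att_dim, 1))
        self.temporal_score = nn.Sequential(
            nn.Linear(feat_dim + hid_dim, att_dim), nn.Tanh(),
            nn.Linear(att_dim, 1))

    def _attend(self, scorer, x, h):
        # Soft attention over the second axis: x (B, N, D), h (B, H) -> (B, D).
        h_exp = h.unsqueeze(1).expand(-1, x.size(1), -1)
        logits = scorer(torch.cat([x, h_exp], dim=-1)).squeeze(-1)
        w = F.softmax(logits, dim=1)
        return (w.unsqueeze(-1) * x).sum(dim=1)

    def forward(self, feats, h):
        B, T, R, D = feats.shape

        # ST: spatial attention within every frame, then temporal pooling
        # over the per-frame attended features (plain soft attention here,
        # standing in for the paper's ranked attention).
        flat = feats.reshape(B * T, R, D)
        h_rep = h.unsqueeze(1).expand(-1, T, -1).reshape(B * T, -1)
        per_frame = self._attend(self.spatial_score, flat, h_rep)
        per_frame = per_frame.reshape(B, T, D)
        st = self._attend(self.temporal_score, per_frame, h)

        # TS: temporal attention first (a soft relaxation of choosing a
        # single frame), then spatial attention within that soft frame.
        frame_means = feats.mean(dim=2)                       # (B, T, D)
        h_t = h.unsqueeze(1).expand(-1, T, -1)
        t_logits = self.temporal_score(
            torch.cat([frame_means, h_t], dim=-1)).squeeze(-1)
        t_w = F.softmax(t_logits, dim=1)                      # (B, T)
        soft_frame = (t_w.unsqueeze(-1).unsqueeze(-1) * feats).sum(dim=1)
        ts = self._attend(self.spatial_score, soft_frame, h)

        # Fuse the two views; the paper's actual combination may differ.
        return st + ts

A typical call would be context = model(feats, h) inside the caption decoder loop, producing one attended feature vector per generated word; conditioning both paths on the evolving hidden state h is what lets the model shift its spatial and temporal focus as the sentence unfolds.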
Cite
Text
Cherian et al. "Spatio-Temporal Ranked-Attention Networks for Video Captioning." Winter Conference on Applications of Computer Vision, 2020.Markdown
BibTeX
@inproceedings{cherian2020wacv-spatiotemporal,
  title     = {{Spatio-Temporal Ranked-Attention Networks for Video Captioning}},
  author    = {Cherian, Anoop and Wang, Jue and Hori, Chiori and Marks, Tim},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2020},
  url       = {https://mlanthology.org/wacv/2020/cherian2020wacv-spatiotemporal/}
}