Sequence to Sequence - Video to Text

Abstract

Real-world videos often have complex dynamics; methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames with a sequence of words in order to generate a description of the event in the video clip. Our model is naturally able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).
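
The snippet below is a minimal sketch, not the authors' released code, of the S2VT-style stacked LSTM the abstract describes: the first LSTM reads projected CNN frame features while the word input is zero-padded, and the second LSTM, conditioned on the resulting state, then reads word embeddings and predicts the next word while the frame input is padded. The use of PyTorch, the class name S2VTSketch, and all dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class S2VTSketch(nn.Module):
    def __init__(self, feat_dim=4096, vocab_size=10000, hidden=500, embed=500):
        super().__init__()
        # Sizes are assumptions for illustration, not the paper's exact settings.
        self.feat_proj = nn.Linear(feat_dim, embed)        # project CNN frame features
        self.word_embed = nn.Embedding(vocab_size, embed)  # word embeddings
        self.lstm1 = nn.LSTM(embed, hidden, batch_first=True)           # "video" LSTM
        self.lstm2 = nn.LSTM(embed + hidden, hidden, batch_first=True)  # "language" LSTM
        self.out = nn.Linear(hidden, vocab_size)
        self.embed = embed

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T_v, feat_dim) frame features; captions: (B, T_w) token ids
        B, Tv, _ = frame_feats.shape
        Tw = captions.shape[1]
        # Encoding stage: feed frames to the first LSTM, pad the word input with zeros.
        v = self.feat_proj(frame_feats)                    # (B, Tv, embed)
        h1_enc, state1 = self.lstm1(v)
        pad_words = torch.zeros(B, Tv, self.embed, device=v.device)
        _, state2 = self.lstm2(torch.cat([pad_words, h1_enc], dim=2))
        # Decoding stage: pad the frame input, feed word embeddings, predict next words.
        pad_frames = torch.zeros(B, Tw, self.embed, device=v.device)
        h1_dec, _ = self.lstm1(pad_frames, state1)
        w = self.word_embed(captions)                      # (B, Tw, embed)
        h2_dec, _ = self.lstm2(torch.cat([w, h1_dec], dim=2), state2)
        return self.out(h2_dec)                            # (B, Tw, vocab_size) logits

Training such a sketch would minimize cross-entropy between the output logits and the ground-truth caption tokens for each video-sentence pair.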

Cite

Text

Venugopalan et al. "Sequence to Sequence - Video to Text." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.515

Markdown

[Venugopalan et al. "Sequence to Sequence - Video to Text." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/venugopalan2015iccv-sequence/) doi:10.1109/ICCV.2015.515

BibTeX

@inproceedings{venugopalan2015iccv-sequence,
  title     = {{Sequence to Sequence - Video to Text}},
  author    = {Venugopalan, Subhashini and Rohrbach, Marcus and Donahue, Jeffrey and Mooney, Raymond and Darrell, Trevor and Saenko, Kate},
  booktitle = {International Conference on Computer Vision},
  year      = {2015},
  doi       = {10.1109/ICCV.2015.515},
  url       = {https://mlanthology.org/iccv/2015/venugopalan2015iccv-sequence/}
}