Adaptive Feature Abstraction for Translating Video to Text

Abstract

Previous models for video captioning often use the output from a specific layer of a Convolutional Neural Network (CNN) as video features. However, the variable, context-dependent semantics in a video may make it more appropriate to adaptively select features from multiple CNN layers. We propose a new approach for generating adaptive spatiotemporal representations of videos for the captioning task. A novel attention mechanism is developed that adaptively and sequentially focuses on different layers of CNN features (levels of feature "abstraction"), as well as on local spatiotemporal regions of the feature maps at each layer. The proposed approach is evaluated on three benchmark datasets: YouTube2Text, M-VAD and MSR-VTT. In addition to visualizations of the results and of how the model operates, these experiments quantitatively demonstrate the effectiveness of the proposed adaptive spatiotemporal feature abstraction for translating videos to sentences with rich semantics.
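
The abstract describes a two-level attention: at each decoding step, the model weights spatiotemporal locations within each CNN layer's feature map and then weights the layers themselves (the levels of "abstraction"). The snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation; the module name AdaptiveLayerAttention, the per-layer linear projections, and all dimensions are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveLayerAttention(nn.Module):
    """Soft attention over (i) locations within each CNN layer and (ii) the layers themselves."""

    def __init__(self, feat_dims, common_dim, hidden_dim):
        super().__init__()
        # Each CNN layer has a different channel count; project all to a common size.
        self.proj = nn.ModuleList([nn.Linear(d, common_dim) for d in feat_dims])
        self.loc_score = nn.Linear(common_dim + hidden_dim, 1)    # within-layer (spatiotemporal) scores
        self.layer_score = nn.Linear(common_dim + hidden_dim, 1)  # across-layer ("abstraction") scores

    def forward(self, feats, h):
        # feats: list of tensors, one per CNN layer, each (batch, num_locations, channels)
        # h: decoder hidden state at the current step, (batch, hidden_dim)
        per_layer_ctx = []
        for f, proj in zip(feats, self.proj):
            f = proj(f)                                            # (B, N, common_dim)
            h_rep = h.unsqueeze(1).expand(-1, f.size(1), -1)       # (B, N, hidden_dim)
            alpha = F.softmax(self.loc_score(torch.cat([f, h_rep], dim=-1)).squeeze(-1), dim=-1)
            per_layer_ctx.append((alpha.unsqueeze(-1) * f).sum(dim=1))  # (B, common_dim)
        ctx = torch.stack(per_layer_ctx, dim=1)                    # (B, num_layers, common_dim)
        h_rep = h.unsqueeze(1).expand(-1, ctx.size(1), -1)
        beta = F.softmax(self.layer_score(torch.cat([ctx, h_rep], dim=-1)).squeeze(-1), dim=-1)
        return (beta.unsqueeze(-1) * ctx).sum(dim=1)               # adaptive context, (B, common_dim)

# Example with hypothetical shapes: three CNN layers, 49 spatial locations each.
# feats = [torch.randn(2, 49, c) for c in (256, 512, 1024)]
# attn = AdaptiveLayerAttention([256, 512, 1024], common_dim=512, hidden_dim=512)
# context = attn(feats, torch.randn(2, 512))   # fed to the RNN caption decoder at this step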

Cite

Text

Pu et al. "Adaptive Feature Abstraction for Translating Video to Text." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12245

Markdown

[Pu et al. "Adaptive Feature Abstraction for Translating Video to Text." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/pu2018aaai-adaptive/) doi:10.1609/AAAI.V32I1.12245

BibTeX

@inproceedings{pu2018aaai-adaptive,
  title     = {{Adaptive Feature Abstraction for Translating Video to Text}},
  author    = {Pu, Yunchen and Min, Martin Renqiang and Gan, Zhe and Carin, Lawrence},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {7284--7291},
  doi       = {10.1609/AAAI.V32I1.12245},
  url       = {https://mlanthology.org/aaai/2018/pu2018aaai-adaptive/}
}