Multi-Modal Dependency Tree for Video Captioning
Abstract
Generating fluent and relevant language to describe visual content is critical for the video captioning task. Many existing methods generate captions using sequence models that predict words in a left-to-right order. In this paper, we investigate a graph-structured model for caption generation by explicitly modeling the hierarchical structure of sentences to further improve the fluency and relevance of sentences. To this end, we propose a novel video captioning method that generates a sentence by first constructing a multi-modal dependency tree and then traversing the constructed tree, where the syntactic structure and semantic relationships in the sentence are represented by the tree topology. To take full advantage of the information from both vision and language, both visual and textual representation features are encoded into each tree node. Different from existing dependency parsing methods that generate uni-modal dependency trees for language understanding, our method constructs multi-modal dependency trees for language generation from images and videos. We also propose a tree-structured reinforcement learning algorithm to effectively optimize the captioning model, where a novel reward is designed by evaluating the semantic consistency between the generated sub-tree and the ground-truth tree. Extensive experiments on several video captioning datasets demonstrate the effectiveness of the proposed method.
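To make the core data structure concrete, the sketch below illustrates one plausible way a multi-modal dependency tree could be represented and linearized into a caption. It is not the authors' implementation: the class `MultiModalTreeNode`, its fields, and the `linearize` helper are all hypothetical names introduced here for illustration, assuming each node pairs a word with a visual feature vector and records the word's position in the final sentence.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MultiModalTreeNode:
    """Hypothetical node of a multi-modal dependency tree.

    Each node stores a word (textual modality) together with a visual
    feature vector, plus its dependents as child sub-trees.
    """
    word: str
    visual_feature: List[float]  # e.g. a pooled frame/region feature (assumed)
    position: int = 0            # linear position of the word in the caption
    children: List["MultiModalTreeNode"] = field(default_factory=list)


def linearize(root: MultiModalTreeNode) -> str:
    """Flatten the dependency tree back into a sentence by word position."""
    nodes: List[MultiModalTreeNode] = []

    def collect(node: MultiModalTreeNode) -> None:
        nodes.append(node)
        for child in node.children:
            collect(child)

    collect(root)
    return " ".join(n.word for n in sorted(nodes, key=lambda n: n.position))


# Toy example: "a man rides a horse" with "rides" as the root predicate.
root = MultiModalTreeNode("rides", [0.2, 0.8], position=2, children=[
    MultiModalTreeNode("man", [0.1, 0.4], position=1,
                       children=[MultiModalTreeNode("a", [0.0, 0.0], position=0)]),
    MultiModalTreeNode("horse", [0.5, 0.3], position=4,
                       children=[MultiModalTreeNode("a", [0.0, 0.0], position=3)]),
])
print(linearize(root))  # -> "a man rides a horse"
```

In the paper's setting, such a tree would be built node by node during generation and the caption recovered by traversing it; the per-node visual features are what distinguish this multi-modal tree from the uni-modal trees produced by standard dependency parsers.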
Cite
Text
Zhao et al. "Multi-Modal Dependency Tree for Video Captioning." Neural Information Processing Systems, 2021.
Markdown
[Zhao et al. "Multi-Modal Dependency Tree for Video Captioning." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/zhao2021neurips-multimodal/)
BibTeX
@inproceedings{zhao2021neurips-multimodal,
title = {{Multi-Modal Dependency Tree for Video Captioning}},
author = {Zhao, Wentian and Wu, Xinxiao and Luo, Jiebo},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/zhao2021neurips-multimodal/}
}