Learning to Guide Decoding for Image Captioning

Abstract

Recently, much progress has been made in image captioning, and the encoder-decoder framework has achieved outstanding performance for this task. In this paper, we propose an extension of the encoder-decoder framework by adding a component called the guiding network. The guiding network models the attribute properties of input images, and its output is leveraged to compose the input of the decoder at each time step. The guiding network can be plugged into the current encoder-decoder framework and trained in an end-to-end manner. Hence, the guiding vector can be adaptively learned according to the signal from the decoder, enabling it to embed information from both image and language. Additionally, discriminative supervision can be employed to further improve the quality of guidance. The advantages of our proposed approach are verified by experiments carried out on the MS COCO dataset.
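The fusion step described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration only (the paper's guiding network is a learned model and the decoder is an RNN; here both are stubbed with simple pooling and concatenation), showing how a guiding vector derived from image features is composed into the decoder input at every time step. All function names and shapes are assumptions for illustration, not the authors' implementation.

```python
def guiding_network(image_features):
    # Hypothetical stand-in for the learned guiding network:
    # mean-pool the per-region image features into one guiding vector g.
    n = len(image_features)
    dim = len(image_features[0])
    return [sum(f[d] for f in image_features) / n for d in range(dim)]

def compose_input(word_embedding, g):
    # Compose the decoder input for one time step by concatenating
    # the current word embedding with the guiding vector.
    return word_embedding + g

def decoder_inputs(word_embeddings, image_features):
    # The same guiding vector g is injected at every time step;
    # in the paper, g is learned end-to-end from the decoder's signal.
    g = guiding_network(image_features)
    return [compose_input(w, g) for w in word_embeddings]

# Toy example: two image regions, two caption time steps.
feats = [[1.0, 2.0], [3.0, 4.0]]
words = [[0.5, 0.5], [1.0, 0.0]]
steps = decoder_inputs(words, feats)
```

In an actual model, `compose_input` would feed an LSTM cell and `guiding_network` would be trained jointly with the encoder and decoder, which is what allows the guiding vector to absorb both visual and linguistic information.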

Cite

Text

Jiang et al. "Learning to Guide Decoding for Image Captioning." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12283

Markdown

[Jiang et al. "Learning to Guide Decoding for Image Captioning." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/jiang2018aaai-learning/) doi:10.1609/AAAI.V32I1.12283

BibTeX

@inproceedings{jiang2018aaai-learning,
  title     = {{Learning to Guide Decoding for Image Captioning}},
  author    = {Jiang, Wenhao and Ma, Lin and Chen, Xinpeng and Zhang, Hanwang and Liu, Wei},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {6959--6966},
  doi       = {10.1609/AAAI.V32I1.12283},
  url       = {https://mlanthology.org/aaai/2018/jiang2018aaai-learning/}
}