Phrase-Based Image Captioning

Abstract

Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely linear model to embed an image representation (generated from a previously trained Convolutional Neural Network) into a multimodal space that is common to the images and the phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on the sentence description statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the inferred phrases. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results on two popular datasets for the task: Flickr30k and the recently proposed Microsoft COCO.
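
The phrase-inference step the abstract describes can be pictured as a bilinear scoring: a matrix maps the CNN image features into the shared multimodal space, where each phrase has its own embedding vector, and phrases are ranked by their score against the embedded image. The Python/NumPy sketch below illustrates that step only; the dimensions, phrase vocabulary, and random parameters are illustrative assumptions, not the authors' code or trained weights.

import numpy as np

# Illustrative sketch of the linear multimodal embedding described in the
# abstract. All sizes and parameters below are assumptions, not the paper's.
rng = np.random.default_rng(0)

n_image = 4096    # CNN image feature size (e.g. an fc7-style layer; assumption)
n_embed = 400     # size of the shared multimodal space (assumption)

phrases = ["a man", "is riding", "a horse", "a dog", "on the beach"]

# V linearly embeds image features into the multimodal space;
# U holds one embedding vector per phrase in that same space.
V = rng.normal(scale=0.01, size=(n_embed, n_image))
U = rng.normal(scale=0.01, size=(len(phrases), n_embed))

def phrase_scores(image_features):
    """Score each phrase against an image: s_c = u_c . (V z)."""
    z = V @ image_features   # map the image into the multimodal space
    return U @ z             # one linear score per phrase

# Toy usage with random "image" features: rank all phrases for the image.
scores = phrase_scores(rng.normal(size=n_image))
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:+.3f}  {phrases[idx]}")

In the full system, the simple statistical language model mentioned in the abstract then chains top-ranked phrases into a sentence; that decoding step is beyond this sketch.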

Cite

Text

Lebret et al. "Phrase-Based Image Captioning." International Conference on Machine Learning, 2015.

Markdown

[Lebret et al. "Phrase-Based Image Captioning." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/lebret2015icml-phrasebased/)

BibTeX

@inproceedings{lebret2015icml-phrasebased,
  title     = {{Phrase-Based Image Captioning}},
  author    = {Lebret, Remi and Pinheiro, Pedro and Collobert, Ronan},
  booktitle = {International Conference on Machine Learning},
  year      = {2015},
  pages     = {2085--2094},
  volume    = {37},
  url       = {https://mlanthology.org/icml/2015/lebret2015icml-phrasebased/}
}