Simple Image Description Generator via a Linear Phrase-Based Approach

Abstract

Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and the phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the inferred phrases. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results on the recently released Microsoft COCO dataset.
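The core of the abstract is a bilinear compatibility score between an image feature and a phrase embedding. As a minimal sketch (not the authors' code), assuming a fixed CNN feature vector x, phrase embeddings y, and a trainable matrix W, the score s(x, y) = xᵀWy can be used to rank candidate phrases for an image:

```python
import numpy as np

# Hypothetical toy dimensions; real CNN features are much larger (e.g. 4096-d).
rng = np.random.default_rng(0)
d_img, d_phr = 8, 5
W = rng.normal(size=(d_img, d_phr)) * 0.1  # bilinear metric, learned in training

def score(x, y, W):
    """Bilinear compatibility s(x, y) = x^T W y between image and phrase."""
    return float(x @ W @ y)

x = rng.normal(size=d_img)             # image representation from a CNN (stand-in)
phrases = rng.normal(size=(3, d_phr))  # candidate phrase embeddings (stand-ins)

scores = [score(x, y, W) for y in phrases]
best = int(np.argmax(scores))          # index of the most compatible phrase
```

At inference time, the highest-scoring phrases would then be assembled into a sentence by the syntax-based language model the abstract describes.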

Cite

Text

Lebret et al. "Simple Image Description Generator via a Linear Phrase-Based Approach." International Conference on Learning Representations, 2015.

Markdown

[Lebret et al. "Simple Image Description Generator via a Linear Phrase-Based Approach." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/lebret2015iclr-simple/)

BibTeX

@inproceedings{lebret2015iclr-simple,
  title     = {{Simple Image Description Generator via a Linear Phrase-Based Approach}},
  author    = {Lebret, Rémi and Pinheiro, Pedro H. O. and Collobert, Ronan},
  booktitle = {International Conference on Learning Representations},
  year      = {2015},
  url       = {https://mlanthology.org/iclr/2015/lebret2015iclr-simple/}
}