Reinforcing an Image Caption Generator Using Off-Line Human Feedback

Abstract

Human ratings are currently the most accurate way to assess the quality of an image captioning model, yet the only outcome typically used from an expensive human rating evaluation is a few overall statistics over the evaluation dataset. In this paper, we show that the signal from instance-level human caption ratings can be leveraged to improve captioning models, even when the amount of caption ratings is several orders of magnitude smaller than the caption training data. We employ a policy gradient method to maximize the human ratings as rewards in an off-policy reinforcement learning setting, where policy gradients are estimated from samples drawn from a distribution that focuses on the captions in a caption-ratings dataset. Our empirical evidence indicates that the proposed method learns to generalize the human raters' judgments to a previously unseen set of images, as judged by a different set of human judges, and additionally on a different, multi-dimensional side-by-side human evaluation procedure.
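
The off-policy setup described above can be pictured with a short sketch: captions that already carry human ratings act as samples from a fixed proposal distribution, and an importance weight corrects for the mismatch between that proposal and the current captioning policy. The snippet below is only an illustrative sketch of such an importance-weighted policy-gradient loss, not the authors' implementation; the `model(images, captions)` interface, the mean-rating baseline, and all variable names are assumptions made for the example.

```python
import torch


def offpolicy_pg_loss(model, images, rated_captions, ratings, proposal_logprobs):
    """Illustrative off-policy policy-gradient loss (hypothetical interface).

    Assumes `model(images, captions)` returns per-caption log-probabilities
    under the current captioning policy. `ratings` are instance-level human
    scores used as rewards, and `proposal_logprobs` are log-probabilities of
    the rated captions under the fixed proposal distribution that focuses on
    the captions in the ratings dataset.
    """
    # Log-probability of each rated caption under the current policy.
    policy_logprobs = model(images, rated_captions)

    # Importance weights correct for sampling from the proposal distribution
    # rather than from the current policy; detached so gradients flow only
    # through the policy log-probabilities.
    importance_weights = torch.exp(policy_logprobs - proposal_logprobs).detach()

    # A simple mean-rating baseline (an assumption here) reduces variance.
    advantages = ratings - ratings.mean()

    # REINFORCE-style objective: raise the likelihood of highly rated
    # captions, scaled by the importance ratio.
    return -(importance_weights * advantages * policy_logprobs).mean()
```

In practice such a loss would be combined with standard maximum-likelihood caption training, since the rated captions are far fewer than the captioning training examples.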

Cite

Text

Seo et al. "Reinforcing an Image Caption Generator Using Off-Line Human Feedback." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I03.5655

Markdown

[Seo et al. "Reinforcing an Image Caption Generator Using Off-Line Human Feedback." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/seo2020aaai-reinforcing/) doi:10.1609/AAAI.V34I03.5655

BibTeX

@inproceedings{seo2020aaai-reinforcing,
  title     = {{Reinforcing an Image Caption Generator Using Off-Line Human Feedback}},
  author    = {Seo, Paul Hongsuck and Sharma, Piyush and Levinboim, Tomer and Han, Bohyung and Soricut, Radu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {2693--2700},
  doi       = {10.1609/AAAI.V34I03.5655},
  url       = {https://mlanthology.org/aaai/2020/seo2020aaai-reinforcing/}
}