Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses

Abstract

Automatically evaluating the quality of dialogue responses for unstructured domains is a challenging problem. Unfortunately, existing automatic evaluation metrics are biased and correlate very poorly with human judgements of response quality (Liu et al., 2016). Yet having an accurate automatic evaluation procedure is crucial for dialogue research, as it allows rapid prototyping and testing of new models with fewer expensive human evaluations. In response to this challenge, we formulate automatic dialogue evaluation as a learning problem. We present an evaluation model (ADEM) that learns to predict human-like scores to input responses, using a new dataset of human response scores. We show that the ADEM model's predictions correlate significantly, and at a level much higher than word-overlap metrics such as BLEU, with human judgements at both the utterance and system-level. We also show that ADEM can generalize to evaluating dialogue models unseen during training, an important step for automatic dialogue evaluation.
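The abstract's central claim is measured by how strongly a metric's scores correlate with human judgements of response quality. As a minimal sketch of that kind of check, using standard Pearson and Spearman correlation from SciPy and hypothetical placeholder scores (not the paper's data or code):

# Sketch: correlating an automatic metric's scores with human judgements
# at the utterance level. Scores below are hypothetical placeholders.
from scipy.stats import pearsonr, spearmanr

human_scores  = [4.0, 2.0, 5.0, 1.0, 3.0]   # human ratings of candidate responses
metric_scores = [3.6, 2.4, 4.8, 1.5, 2.9]   # scores from a learned metric (ADEM-style)

pearson_r,  pearson_p  = pearsonr(human_scores, metric_scores)
spearman_r, spearman_p = spearmanr(human_scores, metric_scores)
print(f"Pearson r = {pearson_r:.3f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_r:.3f} (p = {spearman_p:.3f})")

A system-level comparison follows the same pattern, except each point is a dialogue system's average score rather than a single response.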

Cite

Text

Lowe et al. "Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses." International Conference on Learning Representations, 2017. doi:10.18653/v1/P17-1103

Markdown

[Lowe et al. "Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/lowe2017iclr-automatic/) doi:10.18653/v1/P17-1103

BibTeX

@inproceedings{lowe2017iclr-automatic,
  title     = {{Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses}},
  author    = {Lowe, Ryan and Noseworthy, Michael and Serban, Iulian Vlad and Angelard-Gontier, Nicolas and Bengio, Yoshua and Pineau, Joelle},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  doi       = {10.18653/v1/P17-1103},
  url       = {https://mlanthology.org/iclr/2017/lowe2017iclr-automatic/}
}