One "Ruler" for All Languages: Multi-Lingual Dialogue Evaluation with Adversarial Multi-Task Learning

Abstract

Automatically evaluating the performance of open-domain dialogue systems is a challenging problem. Recent work on neural network-based metrics has shown promising opportunities for automatic dialogue evaluation. However, existing methods focus mainly on monolingual evaluation, so a trained metric does not transfer readily across languages. To address this issue, we propose an adversarial multi-task neural metric (ADVMT) for multi-lingual dialogue evaluation, with feature extraction shared across languages. We evaluate the proposed model in two different languages. Experiments show that the adversarial multi-task neural metric achieves a high correlation with human annotation, yielding better performance than monolingual metrics and various existing metrics.
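The adversarial multi-task setup described above is commonly realized with a shared encoder whose features feed both per-language scoring heads and a language discriminator, connected through a gradient reversal layer so the encoder learns language-invariant features. Below is a minimal, hypothetical sketch of that gradient reversal mechanism (the class name, `lam` scaling factor, and toy values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; negates (and scales) gradients in the
    backward pass, so the shared encoder is pushed to *maximize* the
    language discriminator's loss. (Illustrative sketch, not the paper's code.)"""
    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        # Features pass through unchanged on the way to the discriminator.
        return x

    def backward(self, grad_out):
        # Gradients flowing back to the shared encoder are reversed.
        return -self.lam * grad_out

grl = GradientReversal(lam=0.5)
h = np.array([1.0, -2.0, 3.0])              # toy shared features
out = grl.forward(h)                         # identical to h
g = np.array([0.2, 0.4, -0.6])               # toy gradient from discriminator loss
g_enc = grl.backward(g)                      # reversed and scaled: [-0.1, -0.2, 0.3]
```

In a full training loop, the per-language scoring losses and the reversed discriminator gradient would be combined when updating the shared encoder.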

Cite

Text

Tong et al. "One 'Ruler' for All Languages: Multi-Lingual Dialogue Evaluation with Adversarial Multi-Task Learning." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/616

Markdown

[Tong et al. "One 'Ruler' for All Languages: Multi-Lingual Dialogue Evaluation with Adversarial Multi-Task Learning." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/tong2018ijcai-one/) doi:10.24963/IJCAI.2018/616

BibTeX

@inproceedings{tong2018ijcai-one,
  title     = {{One "Ruler" for All Languages: Multi-Lingual Dialogue Evaluation with Adversarial Multi-Task Learning}},
  author    = {Tong, Xiaowei and Fu, Zhenxin and Shang, Mingyue and Zhao, Dongyan and Yan, Rui},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {4432--4438},
  doi       = {10.24963/IJCAI.2018/616},
  url       = {https://mlanthology.org/ijcai/2018/tong2018ijcai-one/}
}