BERTScore: Evaluating Text Generation with BERT

Abstract

We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics.
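The matching the abstract describes can be sketched as greedy cosine-similarity matching between candidate and reference token embeddings. Below is a minimal sketch with toy embeddings; the actual metric uses BERT's contextual embeddings and adds refinements such as optional idf weighting and score rescaling, which are omitted here.

```python
import numpy as np

def greedy_match_score(cand_emb, ref_emb):
    """BERTScore-style greedy matching over token embeddings.

    cand_emb: (m, d) embeddings of candidate tokens
    ref_emb:  (n, d) embeddings of reference tokens
    (toy stand-ins here; the paper derives these from BERT)
    """
    # L2-normalize rows so dot products are cosine similarities
    cand = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = cand @ ref.T                  # (m, n) pairwise cosine similarity
    precision = sim.max(axis=1).mean()  # each candidate token -> best reference match
    recall = sim.max(axis=0).mean()     # each reference token -> best candidate match
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Identical embeddings should yield a perfect score
emb = np.array([[1.0, 0.0], [0.0, 1.0]])
p, r, f = greedy_match_score(emb, emb)
```

Because each token is matched to its most similar counterpart rather than an exact string match, paraphrases with different surface forms can still score highly.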

Cite

Text

Zhang et al. "BERTScore: Evaluating Text Generation with BERT." International Conference on Learning Representations, 2020.

Markdown

[Zhang et al. "BERTScore: Evaluating Text Generation with BERT." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/zhang2020iclr-bertscore/)

BibTeX

@inproceedings{zhang2020iclr-bertscore,
  title     = {{BERTScore: Evaluating Text Generation with BERT}},
  author    = {Zhang, Tianyi and Kishore, Varsha and Wu, Felix and Weinberger, Kilian Q. and Artzi, Yoav},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/zhang2020iclr-bertscore/}
}