FACE: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy

Abstract

Measuring the distance between machine-produced and human language is a critical open problem. Inspired by empirical findings from psycholinguistics on the periodicity of entropy in language, we propose FACE, a set of metrics based on Fourier Analysis of the estimated Cross-Entropy of language, for measuring the similarity between model-generated and human-written language. Based on an open-ended generation task and experimental data from previous studies, we find that FACE effectively identifies the human-model gap, scales with model size, reflects the outcomes of different sampling methods for decoding, and correlates well with other evaluation metrics and with human judgment scores.
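The core idea in the abstract, comparing the frequency spectra of per-token cross-entropy sequences from human and model text, can be sketched as follows. This is a minimal illustration, not the paper's exact metric: the function names, the mean-removal step, and the choice of Pearson correlation as the spectrum comparison are assumptions, and the cross-entropy values here are synthetic stand-ins for scores an estimator language model would produce.

```python
import numpy as np

def spectrum(cross_entropies):
    """Magnitude spectrum of a cross-entropy sequence via the real FFT.
    The mean (DC component) is removed so periodicity dominates."""
    x = np.asarray(cross_entropies, dtype=float)
    x = x - x.mean()
    return np.abs(np.fft.rfft(x))

def spectral_similarity(human_ce, model_ce):
    """Pearson correlation between the two magnitude spectra
    (one of several plausible ways to compare spectra)."""
    n = min(len(human_ce), len(model_ce))  # truncate to a common length
    s1, s2 = spectrum(human_ce[:n]), spectrum(model_ce[:n])
    return float(np.corrcoef(s1, s2)[0, 1])

# Toy usage with synthetic per-token cross-entropy sequences
rng = np.random.default_rng(0)
human = 3.0 + np.sin(np.arange(64) * 0.5) + 0.1 * rng.standard_normal(64)
model = 3.2 + np.sin(np.arange(64) * 0.5) + 0.1 * rng.standard_normal(64)
similarity = spectral_similarity(human, model)
```

A higher similarity indicates that the model text's entropy fluctuates with a periodic structure closer to the human reference.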

Cite

Text

Yang et al. "FACE: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy." Neural Information Processing Systems, 2023.

Markdown

[Yang et al. "FACE: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/yang2023neurips-face/)

BibTeX

@inproceedings{yang2023neurips-face,
  title     = {{FACE: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy}},
  author    = {Yang, Zuhao and Yuan, Yingfang and Xu, Yang and Zhan, Shuo and Bai, Huajun and Chen, Kefan},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/yang2023neurips-face/}
}