Calibration, Entropy Rates, and Memory in Language Models

Abstract

Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing. Towards this end, we present a calibration-based approach to measure long-term discrepancies between a generative sequence model and the true distribution, and use these discrepancies to improve the model. Empirically, we show that state-of-the-art language models, including LSTMs and Transformers, are miscalibrated: the entropy rates of their generations drift dramatically upward over time. We then provide provable methods to mitigate this phenomenon. Furthermore, we show how this calibration-based approach can also be used to measure the amount of memory that language models use for prediction.
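The entropy-rate diagnostic the abstract describes can be illustrated with a short sketch: sample a sequence from a model and record the entropy of its predictive distribution at each step, then compare early and late windows for upward drift. Everything below is hypothetical illustration, not the paper's code; the `next_token_logits` stand-in (random logits) merely takes the place of a trained LSTM or Transformer.

import torch

vocab_size = 50
torch.manual_seed(0)

def next_token_logits(prefix: torch.Tensor) -> torch.Tensor:
    # Placeholder model: random logits. A real study would query a
    # trained language model conditioned on `prefix`.
    return torch.randn(vocab_size)

def generate_with_entropy(steps: int) -> list[float]:
    """Sample a sequence and record the per-step predictive entropy (nats)."""
    prefix = torch.zeros(1, dtype=torch.long)  # assumed start-token id 0
    entropies = []
    for _ in range(steps):
        logits = next_token_logits(prefix)
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
        entropies.append(entropy)
        token = torch.multinomial(probs, 1)
        prefix = torch.cat([prefix, token])
    return entropies

# A calibrated model's entropy rate should stay flat as generation proceeds;
# the miscalibration the paper reports shows up as late > early.
ent = generate_with_entropy(400)
early, late = sum(ent[:100]) / 100, sum(ent[-100:]) / 100
print(f"early entropy rate: {early:.3f} nats/token, late: {late:.3f} nats/token")

With the random stand-in model the two windows coincide in expectation; substituting a trained model is what would expose the drift the paper measures.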

Cite

Text

Braverman et al. "Calibration, Entropy Rates, and Memory in Language Models." International Conference on Machine Learning, 2020.

Markdown

[Braverman et al. "Calibration, Entropy Rates, and Memory in Language Models." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/braverman2020icml-calibration/)

BibTeX

@inproceedings{braverman2020icml-calibration,
  title     = {{Calibration, Entropy Rates, and Memory in Language Models}},
  author    = {Braverman, Mark and Chen, Xinyi and Kakade, Sham and Narasimhan, Karthik and Zhang, Cyril and Zhang, Yi},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {1089--1099},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/braverman2020icml-calibration/}
}