Improving Diversity in Language Models: When Temperature Fails, Change the Loss
Abstract
Increasing diversity in language models is a challenging yet essential objective. A common approach is to raise the decoding temperature. In this work, we investigate this approach through a simple yet common case to provide insights into why decreasing temperature can improve quality (Precision), while increasing it often fails to boost coverage (Recall). Our analysis reveals that for a model to be effectively tunable through temperature adjustments, it must be trained toward coverage. To address this, we propose rethinking loss functions in language models by leveraging the Precision-Recall framework. Our results demonstrate that this approach achieves a substantially better trade-off between Precision and Recall than merely combining negative log-likelihood training with temperature scaling. These findings offer a pathway toward more versatile and robust language modeling techniques.
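The temperature scaling discussed above is the standard rescaling of logits before the softmax: dividing by a temperature T < 1 sharpens the output distribution toward the mode, while T > 1 flattens it toward uniform. A minimal sketch (the function name and example logits are illustrative, not from the paper):

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Softmax over logits scaled by 1/temperature.

    T < 1 concentrates mass on the highest-logit token (higher Precision);
    T > 1 spreads mass over more tokens (aiming for higher Recall).
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.5]
for T in (0.5, 1.0, 2.0):
    print(f"T={T}: {softmax_with_temperature(logits, T)}")
```

The paper's observation is that this knob alone is asymmetric in practice: lowering T reliably trades diversity for quality, but raising T only recovers coverage if the trained model already places mass near the missing modes.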
Cite
Text
Verine et al. "Improving Diversity in Language Models: When Temperature Fails, Change the Loss." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Verine et al. "Improving Diversity in Language Models: When Temperature Fails, Change the Loss." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/verine2025icml-improving/)
BibTeX
@inproceedings{verine2025icml-improving,
title = {{Improving Diversity in Language Models: When Temperature Fails, Change the Loss}},
author = {Verine, Alexandre and Le Bronnec, Florian and Zheng, Kunhao and Allauzen, Alexandre and Chevaleyre, Yann and Negrevergne, Benjamin},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {61266--61300},
volume = {267},
url = {https://mlanthology.org/icml/2025/verine2025icml-improving/}
}