Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models
Abstract
While language models have exceptional text-generation capabilities, they lack a natural inductive bias for emitting numbers and thus struggle in tasks involving quantitative reasoning, especially arithmetic. One fundamental limitation is the nature of the cross-entropy (CE) loss, which assumes a nominal scale and thus cannot convey proximity between generated number tokens. In response, we present a regression-like loss that operates purely at the token level. Our proposed Number Token Loss (NTL) comes in two flavors and minimizes either the $\mathcal{L}_p$ norm or the Wasserstein distance between the numerical values of the real and predicted number tokens. NTL can easily be added to any language model and extends the CE objective during training without runtime overhead. We evaluate the proposed scheme on various mathematical datasets and find that it consistently improves performance on math-related tasks. In a direct comparison on a regression task, we find that NTL can match the performance of a regression head despite operating at the token level. Finally, we scale NTL up to 3B-parameter models and observe improved performance, demonstrating its potential for seamless integration into LLMs. We hope to inspire LLM developers to improve their pretraining objectives, and we distribute NTL as a minimalistic and lightweight PyPI package, ntloss: https://ibm.biz/ntl-pypi-repo. Development code for full paper reproduction is available separately.
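To make the two flavors concrete, here is a minimal, self-contained PyTorch sketch of both losses, assuming a tokenizer in which each digit 0–9 is a single token. All names (`number_token_losses`, `digit_token_ids`, `digit_values`) are illustrative and are not the actual API of the ntloss package; consult the repository linked above for the authors' maintained implementation.

```python
# Minimal sketch of the two NTL flavors, assuming each digit 0-9 maps to a
# single token in the vocabulary. Names are illustrative, not the ntloss API.
import torch
import torch.nn.functional as F

def number_token_losses(logits, labels, digit_token_ids, digit_values):
    """Compute NTL-MSE (L_2 flavor) and NTL-WAS (Wasserstein-1 flavor).

    logits:          (batch, seq, vocab) raw model outputs
    labels:          (batch, seq) ground-truth token ids
    digit_token_ids: (10,) long tensor with the token ids of "0".."9"
    digit_values:    (10,) float tensor [0., 1., ..., 9.]
    """
    # Only positions whose ground-truth token is a digit contribute to NTL.
    is_digit = torch.isin(labels, digit_token_ids)
    if not is_digit.any():
        zero = logits.new_zeros(())
        return zero, zero

    # Distribution over the ten digit tokens at each digit position.
    digit_logits = logits[is_digit][:, digit_token_ids]   # (n, 10)
    probs = F.softmax(digit_logits, dim=-1)

    # Index (0..9) of the true digit at each of those positions.
    true_idx = (labels[is_digit].unsqueeze(-1) == digit_token_ids).float().argmax(-1)

    # NTL-MSE: squared error between the expected numerical value under the
    # predicted distribution and the true value.
    pred_value = (probs * digit_values).sum(-1)
    ntl_mse = F.mse_loss(pred_value, digit_values[true_idx])

    # NTL-WAS: Wasserstein-1 distance between the predicted digit distribution
    # and the one-hot ground truth; for unit-spaced 1-D values this is the
    # L1 distance between the two CDFs.
    cdf_pred = probs.cumsum(-1)
    cdf_true = F.one_hot(true_idx, num_classes=10).float().cumsum(-1)
    ntl_was = (cdf_pred - cdf_true).abs().sum(-1).mean()

    return ntl_mse, ntl_was
```

In training, either term would simply be added to the standard cross-entropy with a scalar weight, e.g. `loss = ce + lam * ntl_was` (the weight `lam` is a hyperparameter), which matches the abstract's claim that NTL extends the CE objective without runtime overhead.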
Cite
Text
Zausinger et al. "Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Zausinger et al. "Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/zausinger2025icml-regress/)
BibTeX
@inproceedings{zausinger2025icml-regress,
  title = {{Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models}},
  author = {Zausinger, Jonas and Pennig, Lars and Kozina, Anamarija and Sdahl, Sean and Sikora, Julian and Dendorfer, Adrian and Kuznetsov, Timofey and Hagog, Mohamad and Wiedemann, Nina and Chlodny, Kacper and Limbach, Vincent and Ketteler, Anna and Prein, Thorben and Singh, Vishwa Mohan and Danziger, Michael and Born, Jannis},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year = {2025},
  pages = {73995--74017},
  volume = {267},
  url = {https://mlanthology.org/icml/2025/zausinger2025icml-regress/}
}