Extending Temperature Scaling with Homogenizing Maps
Abstract
As machine learning models grow more complex, poor calibration significantly limits the reliability of their predictions. Temperature scaling learns a single temperature parameter that rescales the output logits and, despite its simplicity, remains one of the most effective post-hoc recalibration methods. We identify a defining attribute of temperature scaling: it increases the uncertainty of the predictions in a manner we term homogenization. We propose to learn the optimal recalibration mapping from a larger class of functions that satisfy this property. We demonstrate the advantage of our method over temperature scaling in both calibration and out-of-distribution detection. Additionally, we extend our methodology and experimental evaluation to recalibration in the Bayesian setting.
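Standard temperature scaling, the baseline this paper extends, can be sketched in a few lines. The NumPy example below fits a single temperature T by minimizing validation negative log-likelihood; the grid-search fit and the synthetic overconfident logits are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the temperature-scaled probabilities.
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.1, 5.0, 200)):
    # Post-hoc fit: pick the single T that minimizes validation NLL.
    losses = [nll(logits, labels, T) for T in grid]
    return float(grid[int(np.argmin(losses))])

# Synthetic validation set: the model is ~70% accurate but ~96% confident,
# so its logits are overconfident and the fitted T should exceed 1.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = rng.normal(size=(500, 3))
pred = labels.copy()
flip = rng.random(500) < 0.3          # 30% of predictions are wrong
pred[flip] = (labels[flip] + 1) % 3
logits[np.arange(500), pred] += 4.0   # inflate the predicted class logit

T = fit_temperature(logits, labels)
```

Dividing all logits by the same T > 1 flattens every predicted distribution toward uniform without changing the arg-max; the paper's "homogenizing maps" generalize exactly this uncertainty-increasing behavior.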
Cite
Text
Qian et al. "Extending Temperature Scaling with Homogenizing Maps." Journal of Machine Learning Research, 2025.

Markdown
[Qian et al. "Extending Temperature Scaling with Homogenizing Maps." Journal of Machine Learning Research, 2025.](https://mlanthology.org/jmlr/2025/qian2025jmlr-extending/)

BibTeX
@article{qian2025jmlr-extending,
title = {{Extending Temperature Scaling with Homogenizing Maps}},
author = {Qian, Christopher and Liang, Feng and Adams, Jason},
journal = {Journal of Machine Learning Research},
year = {2025},
pages = {1--46},
volume = {26},
url = {https://mlanthology.org/jmlr/2025/qian2025jmlr-extending/}
}