The Over-Certainty Phenomenon in Modern Test-Time Adaptation Algorithms

Abstract

When neural networks are confronted with data that deviate from their training set, they face a domain shift. Although these networks still output predictions on such inputs, they typically fail to account for how unfamiliar these novel observations are. Prevailing test-time adaptation methods pursue the goal of curtailing model entropy, yet they unintentionally produce models with sub-optimal calibration, a dilemma we term the over-certainty phenomenon. Such over-certain predictions can be particularly dangerous under domain shift, as they may lead to misplaced trust. In this paper, we propose a solution that maintains accuracy while also addressing calibration by mitigating the over-certainty phenomenon. To do this, we introduce a certainty regularizer that dynamically adjusts pseudo-label confidence by accounting for both backbone entropy and logit norm. Our method achieves state-of-the-art Expected Calibration Error and Negative Log Likelihood while maintaining parity in accuracy.
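The abstract only outlines the idea, so the following is a minimal illustrative sketch rather than the paper's actual regularizer: it shows, under assumed choices, how an entropy-minimization test-time adaptation step could down-weight over-certain pseudo-labels using the prediction entropy and the logit norm. The function name `certainty_weighted_entropy_loss`, the specific weighting formula, and the toy model are hypothetical.

```python
import torch
import torch.nn.functional as F

def certainty_weighted_entropy_loss(logits, eps=1e-8):
    """Entropy-minimization objective whose per-sample contribution is
    tempered for over-certain predictions, using prediction entropy and
    logit norm as certainty signals (illustrative assumption, not the
    paper's exact formulation)."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(probs * log_probs).sum(dim=1)   # per-sample predictive entropy
    logit_norm = logits.norm(dim=1)             # per-sample logit magnitude

    # Hypothetical weighting: the weight shrinks as the logit norm grows
    # relative to the entropy, so highly confident samples contribute less.
    weight = entropy / (entropy + logit_norm + eps)
    return (weight.detach() * entropy).mean()

# Usage sketch: adapt only normalization-layer parameters on an unlabeled
# test batch, as is common in entropy-based test-time adaptation.
model = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.BatchNorm1d(64),
    torch.nn.ReLU(), torch.nn.Linear(64, 10),
)
bn_params = [p for m in model.modules()
             if isinstance(m, torch.nn.BatchNorm1d)
             for p in m.parameters()]
optimizer = torch.optim.SGD(bn_params, lr=1e-3)

x = torch.randn(16, 32)   # unlabeled test batch under domain shift
loss = certainty_weighted_entropy_loss(model(x))
loss.backward()
optimizer.step()
```

The weighting term here is one plausible way to combine the two signals named in the abstract; the paper itself should be consulted for the precise definition of its certainty regularizer.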

Cite

Text

Amin and Kim. "The Over-Certainty Phenomenon in Modern Test-Time Adaptation Algorithms." Transactions on Machine Learning Research, 2025.

Markdown

[Amin and Kim. "The Over-Certainty Phenomenon in Modern Test-Time Adaptation Algorithms." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/amin2025tmlr-overcertainty/)

BibTeX

@article{amin2025tmlr-overcertainty,
  title     = {{The Over-Certainty Phenomenon in Modern Test-Time Adaptation Algorithms}},
  author    = {Amin, Fin and Kim, Jung-Eun},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/amin2025tmlr-overcertainty/}
}