Introducing Background Temperature to Characterise Hidden Randomness in Large Language Models

Abstract

Even when decoding with temperature $T=0$, large language models (LLMs) can produce divergent outputs for identical inputs. Recent work has highlighted implementation-level sources of this nondeterminism, including batch-size variation, kernel non-invariance, and floating-point non-associativity. In this work, we formalize this behavior by introducing the notion of background temperature $T_{\mathrm{bg}}$: the effective temperature induced by an implementation-dependent perturbation process that is observed even at nominal $T=0$. We provide clean definitions, show how $T_{\mathrm{bg}}$ relates to a stochastic perturbation governed by the inference environment $I$, and propose an empirical protocol to estimate $T_{\mathrm{bg}}$ via the equivalent temperature $T_n(I)$ of an ideal reference system. We conclude with a set of pilot experiments, run on a representative pool of models from major LLM providers, that demonstrate the idea, and we outline implications for reproducibility, evaluation, and deployment.
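The abstract only sketches the estimation protocol, but the core idea admits a simple illustration: repeatedly query the same prompt at nominal $T=0$ in an inference environment $I$, then find the temperature at which an ideal softmax reference best reproduces the empirical output distribution. The sketch below is a minimal, hypothetical rendering of that idea; the KL-matching criterion, the function names, and the toy numbers are assumptions, not the authors' actual procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reference_softmax(logits, T):
    # Next-token distribution of an ideal reference system at temperature T.
    z = logits / T
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def estimate_equivalent_temperature(ref_logits, counts):
    """Hypothetical estimator of the equivalent temperature T_n(I):
    minimize the KL divergence between the empirical next-token
    distribution observed over repeated nominal-T=0 calls and the
    reference softmax at temperature T."""
    emp = counts / counts.sum()

    def kl(T):
        p = reference_softmax(ref_logits, T)
        mask = emp > 0
        return float(np.sum(emp[mask] * np.log(emp[mask] / p[mask])))

    res = minimize_scalar(kl, bounds=(1e-4, 2.0), method="bounded")
    return res.x

# Toy example (fabricated numbers for illustration only): reference
# logits over a 4-token vocabulary and observed counts from 1000
# repeated T=0 calls in some inference environment I.
ref_logits = np.array([5.0, 3.5, 2.0, 0.5])
counts = np.array([912, 71, 14, 3])
print(f"estimated T_bg ~= {estimate_equivalent_temperature(ref_logits, counts):.3f}")
```

Under this reading, a perfectly deterministic environment would concentrate all counts on the argmax token and drive the estimate toward zero, while any implementation-induced spread pushes the equivalent temperature above zero.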

Cite

Text

Messina and Scotta. "Introducing Background Temperature to Characterise Hidden Randomness in Large Language Models." Transactions on Machine Learning Research, 2026.

Markdown

[Messina and Scotta. "Introducing Background Temperature to Characterise Hidden Randomness in Large Language Models." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/messina2026tmlr-introducing/)

BibTeX

@article{messina2026tmlr-introducing,
  title     = {{Introducing Background Temperature to Characterise Hidden Randomness in Large Language Models}},
  author    = {Messina, Alberto and Scotta, Stefano},
  journal   = {Transactions on Machine Learning Research},
  year      = {2026},
  url       = {https://mlanthology.org/tmlr/2026/messina2026tmlr-introducing/}
}