Learning to Route LLMs with Confidence Tokens
Abstract
Large language models (LLMs) have demonstrated impressive performance on several tasks and are increasingly deployed in real-world applications. However, especially in high-stakes settings, it becomes vital to know when the output of an LLM may be unreliable. Depending on whether an answer is trustworthy, a system can then choose to route the question to another expert or otherwise fall back on safe default behavior. In this work, we study the extent to which LLMs can reliably indicate confidence in their answers, and how this notion of confidence can translate into downstream accuracy gains. We propose Self-Reflection with Error-based Feedback (Self-REF), a lightweight training strategy that teaches LLMs to reliably express confidence in the correctness of their answers. Self-REF introduces confidence tokens into the LLM, from which a confidence score can be extracted. Compared to conventional approaches such as verbalizing confidence and examining token probabilities, we demonstrate empirically that confidence tokens yield significant improvements on downstream routing and rejection learning tasks.
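As a concrete illustration of the confidence-token idea described in the abstract, the sketch below shows one plausible way to turn the probability mass on an appended confidence token into a scalar score and a routing decision. The token names, vocabulary ids, and threshold are illustrative assumptions, not the paper's exact recipe.

```python
import torch

# Hypothetical ids for two added confidence tokens; the paper's actual
# token names and vocabulary positions are not specified here.
CONF_CORRECT_ID = 32000    # e.g. an added "<confident>" token
CONF_INCORRECT_ID = 32001  # e.g. an added "<not_confident>" token


def confidence_score(logits_at_conf_position: torch.Tensor) -> float:
    """Turn the logits at the position where the model emits its
    confidence token into a scalar score (a sketch, not the paper's
    exact extraction procedure)."""
    probs = torch.softmax(logits_at_conf_position, dim=-1)
    p_conf = probs[CONF_CORRECT_ID]
    p_not = probs[CONF_INCORRECT_ID]
    # Normalize over the two confidence tokens only.
    return (p_conf / (p_conf + p_not)).item()


def route(answer: str, score: float, threshold: float = 0.5) -> str:
    """Keep the local model's answer when confident; otherwise defer
    to a stronger expert model (threshold is an assumed tunable)."""
    return answer if score >= threshold else "ROUTE_TO_EXPERT"


# Toy usage: random logits stand in for a real forward pass over a
# vocabulary that includes the two added confidence tokens.
logits = torch.randn(32002)
print(route("Paris", confidence_score(logits)))
```

In a real deployment the threshold would be chosen on a validation set to trade off local accuracy against routing cost, which is the downstream decision the paper's routing and rejection experiments evaluate.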
Cite
Text
Chuang et al. "Learning to Route LLMs with Confidence Tokens." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Chuang et al. "Learning to Route LLMs with Confidence Tokens." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/chuang2025icml-learning/)
BibTeX
@inproceedings{chuang2025icml-learning,
title = {{Learning to Route LLMs with Confidence Tokens}},
author = {Chuang, Yu-Neng and Sarma, Prathusha Kameswara and Gopalan, Parikshit and Boccio, John and Bolouki, Sara and Hu, Xia and Zhou, Helen},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {10859--10878},
volume = {267},
url = {https://mlanthology.org/icml/2025/chuang2025icml-learning/}
}