Benchmarking LLMs via Uncertainty Quantification
Abstract
The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods. However, current evaluation platforms, such as the widely recognized HuggingFace open LLM leaderboard, neglect a crucial aspect -- uncertainty, which is vital for thoroughly assessing LLMs. To bridge this gap, we introduce a new benchmarking approach for LLMs that integrates uncertainty quantification. Our examination involves nine LLMs (LLM series) spanning five representative natural language processing tasks. Our findings reveal that: I) LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs may display greater uncertainty compared to their smaller counterparts; and III) Instruction-finetuning tends to increase the uncertainty of LLMs. These results underscore the significance of incorporating uncertainty in the evaluation of LLMs. Our implementation is available at https://github.com/smartyfh/LLM-Uncertainty-Bench.
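As an illustration of how uncertainty quantification can be folded into such a benchmark, the sketch below implements split conformal prediction for multiple-choice tasks, where the average prediction-set size over the test split serves as an uncertainty measure. This is a minimal sketch, not the paper's exact procedure: the function name, the softmax-based nonconformity score, and the error level `alpha` are illustrative assumptions; consult the linked repository for the actual implementation.

```python
import numpy as np

def conformal_uncertainty(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction on multiple-choice LLM outputs (illustrative sketch).

    cal_probs:  (n_cal, n_options) softmax scores on a calibration split
    cal_labels: (n_cal,) indices of the correct option
    test_probs: (n_test, n_options) softmax scores on the test split
    Returns the average prediction-set size (larger = more uncertain).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true option.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the standard finite-sample correction.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # Prediction set: every option whose nonconformity score is below the threshold.
    pred_sets = test_probs >= 1.0 - q_hat
    return pred_sets.sum(axis=1).mean()

# Toy usage with random scores standing in for real LLM option probabilities.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(4), size=500)
cal_labels = rng.integers(0, 4, size=500)
test_probs = rng.dirichlet(np.ones(4), size=200)
print(conformal_uncertainty(cal_probs, cal_labels, test_probs))
```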
Cite
Text
Ye et al. "Benchmarking LLMs via Uncertainty Quantification." Neural Information Processing Systems, 2024. doi:10.52202/079017-0491Markdown
[Ye et al. "Benchmarking LLMs via Uncertainty Quantification." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/ye2024neurips-benchmarking/) doi:10.52202/079017-0491BibTeX
@inproceedings{ye2024neurips-benchmarking,
title = {{Benchmarking LLMs via Uncertainty Quantification}},
author = {Ye, Fanghua and Yang, Mingming and Pang, Jianhui and Wang, Longyue and Wong, Derek F. and Yilmaz, Emine and Shi, Shuming and Tu, Zhaopeng},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-0491},
url = {https://mlanthology.org/neurips/2024/ye2024neurips-benchmarking/}
}