Am-ELO: A Stable Framework for Arena-Based LLM Evaluation

Abstract

Arena-based evaluation is a fundamental and significant evaluation paradigm for modern AI models, especially large language models (LLMs). Existing frameworks based on the ELO rating system suffer from an inevitable instability problem, due to ranking inconsistency and a lack of attention to the varying abilities of annotators. In this paper, we introduce a novel stable arena framework that addresses these issues by enhancing the ELO rating system. Specifically, we replace the iterative update method with a Maximum Likelihood Estimation (MLE) approach, m-ELO, and provide theoretical proof of the consistency and stability of the MLE approach for model ranking. Additionally, we propose am-ELO, which modifies the ELO rating's probability function to incorporate annotator abilities, enabling the simultaneous estimation of model scores and annotator reliability. Experiments demonstrate that this method ensures stability, showing that the framework offers a more robust, accurate, and stable evaluation method for LLMs.
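
The two ideas summarized in the abstract, fitting ELO-style scores by maximum likelihood instead of iterative updates and weighting each comparison by an annotator ability term, can be illustrated with a minimal sketch. The parameterization below (a Bradley–Terry-style likelihood with a per-annotator discrimination factor `alpha[k]`) is an assumption for illustration, not the paper's exact formulation; the toy data, variable names, and the `neg_log_likelihood` helper are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Toy pairwise comparison data: (model_i, model_j, annotator_k, outcome),
# where outcome = 1 if model_i won and 0 if model_j won.
comparisons = [
    (0, 1, 0, 1), (0, 1, 1, 1), (1, 2, 0, 1),
    (0, 2, 1, 1), (2, 1, 0, 0), (2, 0, 1, 0),
]
n_models, n_annotators = 3, 2

def neg_log_likelihood(params):
    """Joint negative log-likelihood of model scores and annotator abilities."""
    theta = params[:n_models]           # model scores (ELO-like)
    alpha = np.exp(params[n_models:])   # annotator abilities, kept positive
    nll = 0.0
    for i, j, k, y in comparisons:
        # Annotator-weighted win probability: a higher-ability annotator
        # makes the comparison more discriminative.
        p = 1.0 / (1.0 + np.exp(-alpha[k] * (theta[i] - theta[j])))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        nll -= y * np.log(p) + (1 - y) * np.log(1 - p)
    return nll

x0 = np.zeros(n_models + n_annotators)
result = minimize(neg_log_likelihood, x0, method="L-BFGS-B")
scores = result.x[:n_models]
abilities = np.exp(result.x[n_models:])
print("Estimated model scores:", scores - scores.mean())  # center for identifiability
print("Estimated annotator abilities:", abilities)
```

Because multiplying all scores by a constant and dividing all abilities by the same constant leaves the likelihood unchanged, the sketch only centers the scores after fitting; a full implementation would also fix the scale explicitly (for example, via the conventional ELO scaling of 400 points per factor of 10 in odds).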

Cite

Text

Liu et al. "Am-ELO: A Stable Framework for Arena-Based LLM Evaluation." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Liu et al. "Am-ELO: A Stable Framework for Arena-Based LLM Evaluation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/liu2025icml-amelo/)

BibTeX

@inproceedings{liu2025icml-amelo,
  title     = {{Am-ELO: A Stable Framework for Arena-Based LLM Evaluation}},
  author    = {Liu, Zirui and Li, Jiatong and Zhuang, Yan and Liu, Qi and Shen, Shuanghong and Ouyang, Jie and Cheng, Mingyue and Wang, Shijin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {38857--38868},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/liu2025icml-amelo/}
}