Rethinking Generative Large Language Model Evaluation for Semantic Comprehension

Abstract

Despite their sophisticated capabilities, large language models (LLMs) remain difficult to assess effectively. This paper first revisits the prevalent evaluation method, multiple-choice question answering (MCQA), which allows for straightforward accuracy measurement. Through a comprehensive evaluation of 24 models across 11 benchmarks, we highlight several potential drawbacks of MCQA, for instance, the inconsistency between MCQA evaluation and the generation of open-ended responses in practical scenarios. In response, we introduce an RWQ-Elo rating system that engages 24 LLMs, such as GPT-4, GPT-3.5, Google-Gemini-Pro and LLaMA-1/-2, in a two-player competitive format, with GPT-4 serving as the judge. Each LLM receives an Elo rating thereafter. This system is designed to mirror real-world usage, and for this purpose we have compiled a new benchmark called “Real-world questions” (RWQ), comprising 20,772 authentic user inquiries. Additionally, we thoroughly analyze the characteristics of our system and compare it with prior leaderboards such as AlpacaEval and MT-Bench. Our analysis demonstrates the stability of the RWQ-Elo system, the feasibility of registering new models, and its potential to reshape LLM leaderboards.
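
The abstract does not spell out the paper's exact update rule, but the two-player format it describes follows the standard Elo scheme: after each judged match, the winner takes rating points from the loser in proportion to how unexpected the result was. Below is a minimal sketch of that standard update; the K-factor of 16 and the initial rating of 1000 are illustrative assumptions, not values taken from the paper.

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 16.0):
    """Update both ratings after one match.

    score_a is 1.0 if A wins, 0.0 if A loses, and 0.5 for a tie
    (e.g., as decided by a judge model on one question).
    """
    e_a = expected_score(r_a, r_b)
    delta = k * (score_a - e_a)
    return r_a + delta, r_b - delta

# Illustrative run: two models start at an assumed 1000 rating;
# model A wins the first judged comparison.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
ratings["model_a"], ratings["model_b"] = elo_update(
    ratings["model_a"], ratings["model_b"], score_a=1.0
)
print(ratings)  # {'model_a': 1008.0, 'model_b': 992.0}

Because the update is zero-sum and proportional to surprise, an upset win against a higher-rated model moves both ratings more than an expected win does; repeated over many judged matches, the ratings converge toward a ranking of the competing models.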

Cite

Text

Wei et al. "Rethinking Generative Large Language Model Evaluation for Semantic Comprehension." International Conference on Machine Learning, 2024.

Markdown

[Wei et al. "Rethinking Generative Large Language Model Evaluation for Semantic Comprehension." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/wei2024icml-rethinking/)

BibTeX

@inproceedings{wei2024icml-rethinking,
  title     = {{Rethinking Generative Large Language Model Evaluation for Semantic Comprehension}},
  author    = {Wei, Fangyun and Chen, Xi and Luo, Lin},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {52525--52558},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/wei2024icml-rethinking/}
}