From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and Benchbuilder Pipeline

Abstract

The rapid evolution of Large Language Models (LLMs) has outpaced the development of model evaluation, highlighting the need for continuous curation of new, challenging benchmarks. However, manual curation of high-quality, human-aligned benchmarks is expensive and time-consuming. To address this, we introduce BenchBuilder, an automated pipeline that leverages LLMs to curate high-quality, open-ended prompts from large, crowd-sourced datasets, enabling continuous benchmark updates without a human in the loop. We apply BenchBuilder to datasets such as Chatbot Arena and WildChat-1M, extracting challenging prompts and utilizing LLM-as-a-Judge for automatic model evaluation. To validate benchmark quality, we propose new metrics to measure a benchmark's alignment with human preferences and its ability to separate models. We release Arena-Hard-Auto, a benchmark consisting of 500 challenging prompts curated by BenchBuilder. Arena-Hard-Auto provides 3x higher separation of model performances compared to MT-Bench and achieves 98.6% correlation with human preference rankings, all at a cost of $20. Our work establishes a new framework for the scalable curation of automated benchmarks from extensive data.

Cite

Text

Li et al. "From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and Benchbuilder Pipeline." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Li et al. "From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and Benchbuilder Pipeline." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/li2025icml-crowdsourced/)

BibTeX

@inproceedings{li2025icml-crowdsourced,
  title     = {{From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and Benchbuilder Pipeline}},
  author    = {Li, Tianle and Chiang, Wei-Lin and Frick, Evan and Dunlap, Lisa and Wu, Tianhao and Zhu, Banghua and Gonzalez, Joseph E. and Stoica, Ion},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {34209--34231},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/li2025icml-crowdsourced/}
}