UniCBE: An Uniformity-Driven Comparing Based Evaluation Framework with Unified Multi-Objective Optimization

Abstract

Human preference plays a significant role in measuring large language models and guiding them to align with human values. Unfortunately, current comparing-based evaluation (CBE) methods typically focus on a single optimization objective, failing to effectively utilize scarce yet valuable preference signals. To address this, we delve into the key factors that can enhance the accuracy, convergence, and scalability of CBE: suppressing sampling bias, balancing the descending process of uncertainty, and mitigating updating uncertainty. Following the derived guidelines, we propose UniCBE, a unified uniformity-driven CBE framework that simultaneously optimizes these core objectives by constructing and integrating three decoupled sampling probability matrices, each designed to ensure uniformity in a specific aspect. We further ablate the optimal tuple sampling and preference aggregation strategies to achieve efficient CBE. On the AlpacaEval benchmark, UniCBE saves over 17% of the evaluation budget while achieving a Pearson correlation with ground truth exceeding 0.995, demonstrating excellent accuracy and convergence. In scenarios where new models are continuously introduced, UniCBE can even save over 50% of evaluation costs, highlighting its improved scalability.
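The core mechanism the abstract describes, building several decoupled sampling probability matrices and integrating them into a single distribution over comparison tuples, can be sketched in a few lines. The snippet below is a minimal illustration under our own assumptions, not the paper's implementation: the inverse-count uniformity rule, the element-wise product used for integration, and all names and sizes (`counts`, `inverse_count_probs`, 5 models, 20 prompts) are hypothetical choices made to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 20  # hypothetical sizes: 5 models under evaluation, 20 prompts

# counts[a, b, k]: how often models a and b have been compared on prompt k so far.
counts = np.zeros((M, M, N))

def inverse_count_probs(c):
    """Assign higher probability to under-sampled entries (illustrative rule)."""
    p = 1.0 / (1.0 + c)
    return p / p.sum()

# Three decoupled probability matrices, one per uniformity objective
# (objectives follow the abstract; the concrete formulas are assumptions):
p_pairs   = inverse_count_probs(counts.sum(axis=2))       # uniform coverage of model pairs
p_prompts = inverse_count_probs(counts.sum(axis=(0, 1)))  # uniform coverage of prompts
p_tuples  = inverse_count_probs(counts)                   # uniform coverage of full tuples

# Integrate: broadcast the pair and prompt matrices over the tuple grid,
# multiply element-wise, forbid self-comparisons, and renormalize.
p = p_tuples * p_pairs[:, :, None] * p_prompts[None, None, :]
p[np.arange(M), np.arange(M), :] = 0.0  # no model compared against itself
p /= p.sum()

# Draw the next (model_a, model_b, prompt) tuple to send for preference judging.
idx = rng.choice(p.size, p=p.ravel())
a, b, k = np.unravel_index(idx, p.shape)
print(f"next comparison: model {a} vs model {b} on prompt {k}")
```

In a full CBE loop, the observed preference from each judged tuple would update `counts` (and the running win-rate estimates) before the next draw, so the integrated distribution keeps steering the budget toward under-covered pairs, prompts, and tuples.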

Cite

Text

Yuan et al. "UniCBE: An Uniformity-Driven Comparing Based Evaluation Framework with Unified Multi-Objective Optimization." International Conference on Learning Representations, 2025.

Markdown

[Yuan et al. "UniCBE: An Uniformity-Driven Comparing Based Evaluation Framework with Unified Multi-Objective Optimization." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yuan2025iclr-unicbe/)

BibTeX

@inproceedings{yuan2025iclr-unicbe,
  title     = {{UniCBE: An Uniformity-Driven Comparing Based Evaluation Framework with Unified Multi-Objective Optimization}},
  author    = {Yuan, Peiwen and Feng, Shaoxiong and Li, Yiwei and Wang, Xinglin and Zhang, Yueqi and Shi, Jiayi and Tan, Chuyi and Pan, Boyuan and Hu, Yao and Li, Kan},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yuan2025iclr-unicbe/}
}