Any Large Language Model Can Be a Reliable Judge: Debiasing with a Reasoning-Based Bias Detector
Abstract
LLM-as-a-Judge has emerged as a promising tool for automatically evaluating generated outputs, but its reliability is often undermined by potential biases in judgment. Existing efforts to mitigate these biases face key limitations: in-context learning-based methods fail to address rooted biases due to the evaluator's limited capacity for self-reflection, whereas fine-tuning is not applicable to all evaluator types, especially closed-source models. To address this challenge, we introduce the **R**easoning-based **B**ias **D**etector (RBD), a plug-in module that identifies biased evaluations and generates structured reasoning to guide evaluator self-correction. Rather than modifying the evaluator itself, RBD operates externally and engages in an iterative process of bias detection and feedback-driven revision. To support its development, we design a complete pipeline consisting of biased dataset construction, supervision collection, distilled reasoning-based fine-tuning of RBD, and integration with LLM evaluators. We fine-tune four sizes of RBD models, ranging from 1.5B to 14B, and observe consistent performance improvements across all scales. Experimental results on 4 bias types—verbosity, position, bandwagon, and sentiment—evaluated using 8 LLM evaluators demonstrate RBD's strong effectiveness. For example, the RBD-8B model improves evaluation accuracy by an average of 18.5% and consistency by 10.9%, and surpasses prompting-based baselines and fine-tuned judges by 12.8% and 17.2%, respectively. These results highlight RBD's effectiveness and scalability. Additional experiments further demonstrate its strong generalization across biases and domains, as well as its efficiency.
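The abstract describes an iterative loop in which the external RBD module flags a biased verdict and feeds structured reasoning back to the evaluator for revision. Below is a minimal sketch of that loop; the paper does not specify an API, so the object interfaces (`evaluator.evaluate`, `rbd.detect_bias`) and the stopping rule are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of the RBD-guided debiasing loop described in the abstract.
# `evaluator` and `rbd` are assumed wrappers around an LLM judge and the RBD model;
# their method names and the `max_rounds` cutoff are assumptions for illustration.

def debiased_judgment(evaluator, rbd, question, answer_a, answer_b, max_rounds=3):
    """Judge a pair of answers, letting RBD flag biased verdicts and
    return reasoning that guides the evaluator's self-correction."""
    verdict = evaluator.evaluate(question, answer_a, answer_b)

    for _ in range(max_rounds):
        # RBD inspects the verdict externally; the evaluator itself is never modified.
        report = rbd.detect_bias(question, answer_a, answer_b, verdict)
        if not report.biased:
            break  # no bias detected: accept the current verdict
        # Feed RBD's structured reasoning back so the evaluator can revise its judgment.
        verdict = evaluator.evaluate(
            question, answer_a, answer_b,
            feedback=report.reasoning,
        )
    return verdict
```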
Cite
Text
Yang et al. "Any Large Language Model Can Be a Reliable Judge: Debiasing with a Reasoning-Based Bias Detector." Advances in Neural Information Processing Systems, 2025.
Markdown
[Yang et al. "Any Large Language Model Can Be a Reliable Judge: Debiasing with a Reasoning-Based Bias Detector." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/yang2025neurips-any/)
BibTeX
@inproceedings{yang2025neurips-any,
title = {{Any Large Language Model Can Be a Reliable Judge: Debiasing with a Reasoning-Based Bias Detector}},
author = {Yang, Haoyan and Bao, Runxue and Xiao, Cao and Ma, Jun and Bhatia, Parminder and Gao, Shangqian and Kass-Hout, Taha},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/yang2025neurips-any/}
}