Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking
Abstract
The release of ChatGPT in November 2022 sparked an explosion of interest in post-training and an avalanche of new preference optimization (PO) methods. These methods claim superior alignment by virtue of better correspondence with human pairwise preferences, often measured by LLM-judges. In this work, we attempt to answer the following question -- do LLM-judge preferences translate to progress on other, more concrete metrics for alignment, and if not, why not? We define a concrete metric for alignment, and introduce SOS-Bench (Substance Outweighs Style Benchmark), the largest standardized, reproducible LLM meta-benchmark to date. We find that (1) LLM-judge preferences do not correlate with concrete measures of safety, world knowledge, and instruction following; (2) LLM-judges have powerful implicit biases, prioritizing style over factuality and safety; and (3) the supervised fine-tuning (SFT) stage of post-training has a large impact on alignment, with data scaling and prompt diversity as the driving factors.
Cite
Text
Feuer et al. "Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking." International Conference on Learning Representations, 2025.
Markdown
[Feuer et al. "Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/feuer2025iclr-style/)
BibTeX
@inproceedings{feuer2025iclr-style,
  title     = {{Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking}},
  author    = {Feuer, Benjamin and Goldblum, Micah and Datta, Teresa and Nambiar, Sanjana and Besaleli, Raz and Dooley, Samuel and Cembalest, Max and Dickerson, John P},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/feuer2025iclr-style/}
}