Evaluating the Evaluator: Measuring LLMs' Adherence to Task Evaluation Instructions
Abstract
LLM-as-a-judge is a recently popularized method that replaces human judgements in task evaluation with automatic evaluation by LLMs. Due to the widespread use of RLHF (Reinforcement Learning from Human Feedback), state-of-the-art LLMs such as GPT-4 and Llama 3 are expected to align closely with human preferences when prompted for a quality judgement, such as the coherence of a text. While this seems beneficial, it is unclear whether the assessments of an LLM-as-a-judge constitute an evaluation based solely on the instructions in the prompt, or instead reflect the model's preference for high-quality data resembling its fine-tuning data. To investigate how much influence prompting has on the alignment of AI judgements with human judgements, we analyze prompts with increasing levels of instruction about the target quality of an evaluation, across several LLMs-as-a-judge. We further compare against a prompt-free method that uses model perplexity as a quality measure instead. We aggregate a taxonomy of quality criteria commonly used across state-of-the-art evaluations with LLMs and provide it as a rigorous benchmark of models as judges. Overall, we show that LLMs-as-a-judge benefit only marginally from highly detailed instructions in prompts and that perplexity can sometimes align better with human judgements than prompting, especially on textual quality.
Cite
Text
Murugadoss et al. "Evaluating the Evaluator: Measuring LLMs' Adherence to Task Evaluation Instructions." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I18.34157
Markdown
[Murugadoss et al. "Evaluating the Evaluator: Measuring LLMs' Adherence to Task Evaluation Instructions." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/murugadoss2025aaai-evaluating/) doi:10.1609/AAAI.V39I18.34157
BibTeX
@inproceedings{murugadoss2025aaai-evaluating,
title = {{Evaluating the Evaluator: Measuring LLMs' Adherence to Task Evaluation Instructions}},
author = {Murugadoss, Bhuvanashree and Pölitz, Christian and Drosos, Ian and Le, Vu and McKenna, Nick and Negreanu, Carina Suzana and Parnin, Chris and Sarkar, Advait},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {19589--19597},
doi = {10.1609/AAAI.V39I18.34157},
url = {https://mlanthology.org/aaai/2025/murugadoss2025aaai-evaluating/}
}