RATE: Causal Explainability of Reward Models with Imperfect Counterfactuals
Abstract
Reward models are widely used as proxies for human preferences when aligning or evaluating LLMs. However, reward models are black boxes, and it is often unclear what, exactly, they are rewarding. In this paper we develop the Rewrite-based Attribute Treatment Estimator (RATE) as an effective method for measuring the sensitivity of a reward model to high-level attributes of responses, such as sentiment, helpfulness, or complexity. Importantly, RATE measures the causal effect of an attribute on the reward. RATE uses LLMs to rewrite responses, producing imperfect counterfactual examples that can be used to measure causal effects. A key challenge is that these rewrites are imperfect in a manner that can induce substantial bias in the estimated sensitivity of the reward model to the attribute. The core idea of RATE is to adjust for this imperfect-rewrite effect by rewriting twice. We establish the validity of the RATE procedure and show empirically that it is an effective estimator.
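The double-rewrite idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the Boolean attribute interface, and the direction of the rewrites are assumptions for the example.

```python
def rate_effect(responses, reward, rewrite):
    """Estimate the effect of an attribute on a reward model via double rewrites.

    responses: texts that exhibit the attribute of interest
    reward:    maps text -> scalar reward (the black-box reward model)
    rewrite:   maps (text, attribute=bool) -> rewritten text (an LLM rewriter)
    """
    effects = []
    for x in responses:
        x_off = rewrite(x, attribute=False)    # single rewrite: remove the attribute
        x_on = rewrite(x_off, attribute=True)  # double rewrite: add it back
        # Both texts have passed through a rewrite, so the shared
        # rewrite artifacts cancel in the reward difference.
        effects.append(reward(x_on) - reward(x_off))
    return sum(effects) / len(effects)
```

Comparing the double rewrite against the single rewrite, rather than against the original response, is the bias adjustment: each side of the difference carries the same imperfect-rewrite effect.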
Cite
Text
Reber et al. "RATE: Causal Explainability of Reward Models with Imperfect Counterfactuals." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Reber et al. "RATE: Causal Explainability of Reward Models with Imperfect Counterfactuals." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/reber2025icml-rate/)
BibTeX
@inproceedings{reber2025icml-rate,
title = {{RATE: Causal Explainability of Reward Models with Imperfect Counterfactuals}},
author = {Reber, David and Richardson, Sean M and Nief, Todd and Garbacea, Cristina and Veitch, Victor},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {51341--51368},
volume = {267},
url = {https://mlanthology.org/icml/2025/reber2025icml-rate/}
}