A Theoretical Framework for Partially-Observed Reward States in RLHF

Abstract

The growing deployment of reinforcement learning from human feedback (RLHF) calls for a deeper theoretical investigation of its underlying models. The prevalent models of RLHF do not account for neuroscience-backed, partially-observed "internal states" that can affect human feedback, nor do they accommodate intermediate feedback during an interaction. Both of these can be instrumental in speeding up learning and improving alignment. To address these limitations, we model RLHF as reinforcement learning with partially observed reward-states (PORRL). We accommodate two kinds of feedback: cardinal and dueling. We first demonstrate that PORRL subsumes a wide class of RL problems, including traditional RL, RLHF, and reward machines. For cardinal feedback, we present two model-based methods (POR-UCRL and POR-UCBVI). We give both cardinal regret and sample complexity guarantees for these methods, showing that they improve over naive history-summarization. We then discuss the benefits of a model-free method like GOLF with naive history-summarization in settings with recursive internal states and dense intermediate feedback. For this purpose, we define a new history-aware version of the Bellman-eluder dimension and give a new guarantee for GOLF in our setting, which can be exponentially sharper in illustrative examples. For dueling feedback, we show that a naive reduction to cardinal feedback fails to achieve sublinear dueling regret. We then present the first explicit reduction that converts guarantees for cardinal regret into guarantees for dueling regret. In both feedback settings, we show that our models and guarantees generalize and extend existing ones.
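
To make the abstract's setup a little more concrete, the sketch below writes down one plausible form of a partially-observed reward-state interaction and the cardinal regret it refers to. The notation (an internal-state space with kernel P_U, a feedback map f) is our own illustrative shorthand inferred from the abstract, not the paper's formal definitions.

% Illustrative sketch only: the symbols U, P_U, and f are assumed notation, not taken from the paper.
% At step h, the environment keeps a hidden internal reward-state u_h alongside the observed state s_h;
% the agent sees s_h and the feedback r_h, but never observes u_h directly.
\[
  s_{h+1} \sim P(\cdot \mid s_h, a_h), \qquad
  u_{h+1} \sim P_U(\cdot \mid u_h, s_h, a_h), \qquad
  r_h \sim f(\cdot \mid u_h, s_h, a_h).
\]
% Cardinal regret over K episodes would then compare the executed policies \pi_k to an optimal policy:
\[
  \mathrm{Reg}(K) = \sum_{k=1}^{K} \left( V_1^{\pi^\star}(s_1) - V_1^{\pi_k}(s_1) \right),
\]
% while dueling feedback would replace r_h with a preference between two candidate trajectories, and
% one common convention charges dueling regret for the suboptimality of both policies played in each duel.

The paper's actual model may differ, for example in how intermediate feedback enters the protocol and in the exact regret benchmarks used for the cardinal and dueling settings.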

Cite

Text

Kausik et al. "A Theoretical Framework for Partially-Observed Reward States in RLHF." International Conference on Learning Representations, 2025.

Markdown

[Kausik et al. "A Theoretical Framework for Partially-Observed Reward States in RLHF." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/kausik2025iclr-theoretical/)

BibTeX

@inproceedings{kausik2025iclr-theoretical,
  title     = {{A Theoretical Framework for Partially-Observed Reward States in RLHF}},
  author    = {Kausik, Chinmaya and Mutti, Mirco and Pacchiano, Aldo and Tewari, Ambuj},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/kausik2025iclr-theoretical/}
}