Preference-Based Reinforcement Learning with Finite-Time Guarantees

Abstract

Preference-based Reinforcement Learning (PbRL) replaces the reward values of traditional reinforcement learning with preferences, to better elicit human opinion on the target objective, especially when numerical reward values are hard to design or interpret. Despite promising results in applications, the theoretical understanding of PbRL is still in its infancy. In this paper, we present the first finite-time analysis for general PbRL problems. We first show that a unique optimal policy may not exist when preferences over trajectories are deterministic. When preferences are stochastic and the preference probability relates to the hidden reward values, we present algorithms for PbRL, both with and without a simulator, that identify the best policy up to accuracy $\varepsilon$ with high probability. Our method explores the state space by navigating to under-explored states, and solves PbRL using a combination of dueling bandits and policy search. Experiments demonstrate the efficacy of our method when applied to real-world problems.
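To make the stochastic-preference setting concrete, below is a minimal Python sketch, not taken from the paper, of one common modeling choice: a Bradley-Terry-style logistic link between hidden trajectory returns and pairwise preference probabilities, with a simple duel-based comparison of two policies. The logistic link and the helper names (`preference_prob`, `duel`, `compare_policies`, `roll_a`, `roll_b`) are illustrative assumptions; the paper only requires that the preference probability relates to the hidden reward values, and its actual algorithms combine exploration of under-explored states with dueling bandits and policy search.

```python
import numpy as np

rng = np.random.default_rng(0)


def trajectory_return(rewards):
    """Sum of hidden per-step rewards along one trajectory."""
    return float(np.sum(rewards))


def preference_prob(traj_a, traj_b):
    """P(trajectory A preferred over trajectory B).

    Assumes a logistic (Bradley-Terry-style) link between the hidden
    return gap and the stochastic preference -- one common choice,
    used here only for illustration.
    """
    gap = trajectory_return(traj_a) - trajectory_return(traj_b)
    return 1.0 / (1.0 + np.exp(-gap))


def duel(traj_a, traj_b):
    """Simulate one noisy pairwise comparison: 1 if A wins, else 0."""
    return int(rng.random() < preference_prob(traj_a, traj_b))


def compare_policies(roll_a, roll_b, n_duels=2000):
    """Estimate P(A preferred over B) from repeated duels between
    trajectories drawn from two policies; roll_a/roll_b are
    hypothetical helpers returning per-step reward sequences."""
    wins = sum(duel(roll_a(), roll_b()) for _ in range(n_duels))
    return wins / n_duels


# Toy usage: policy A collects slightly higher hidden rewards on average,
# so it should win noticeably more than half of the duels.
roll_a = lambda: rng.normal(0.6, 1.0, size=10)
roll_b = lambda: rng.normal(0.5, 1.0, size=10)
print(f"Estimated preference for A over B: {compare_policies(roll_a, roll_b):.2f}")
```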

Cite

Text

Xu et al. "Preference-Based Reinforcement Learning with Finite-Time Guarantees." Neural Information Processing Systems, 2020.

Markdown

[Xu et al. "Preference-Based Reinforcement Learning with Finite-Time Guarantees." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/xu2020neurips-preferencebased/)

BibTeX

@inproceedings{xu2020neurips-preferencebased,
  title     = {{Preference-Based Reinforcement Learning with Finite-Time Guarantees}},
  author    = {Xu, Yichong and Wang, Ruosong and Yang, Lin and Singh, Aarti and Dubrawski, Artur},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/xu2020neurips-preferencebased/}
}