CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries

Abstract

Preference-based reinforcement learning (PbRL) bypasses explicit reward engineering by inferring reward functions from human preference comparisons, enabling better alignment with human intentions. However, humans often struggle to state a clear preference between similar segments, which reduces label efficiency and limits PbRL’s real-world applicability. To address this, we propose an offline PbRL method, Contrastive LeArning for ResolvIng Ambiguous Feedback (CLARIFY), which learns a trajectory embedding space that incorporates preference information and keeps clearly distinguished segments well separated, thereby facilitating the selection of less ambiguous queries. Extensive experiments demonstrate that CLARIFY outperforms baselines under both non-ideal teachers and real human feedback. Our approach not only selects more clearly distinguished queries but also learns meaningful trajectory embeddings.
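
The abstract does not specify the exact objective, but the core idea can be illustrated with a minimal sketch: a contrastive loss over segment embeddings that pushes apart pairs the teacher distinguished clearly and pulls together pairs the teacher found ambiguous, so that embedding distance can serve as a proxy for query clarity. All names, the encoder architecture, and the margin below are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed names and hyperparameters), not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentEncoder(nn.Module):
    """Maps a (segment_len, obs_dim + act_dim) segment to a fixed-size embedding."""
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, segment):
        # Encode each timestep, then average-pool into one embedding per segment.
        return self.net(segment).mean(dim=-2)

def preference_contrastive_loss(z0, z1, clear_label, margin=1.0):
    """clear_label = 1 if the teacher gave a clear preference between the two
    segments, 0 if the teacher found them indistinguishable."""
    dist = F.pairwise_distance(z0, z1)
    # Clearly distinguished pairs: keep them at least `margin` apart.
    push = clear_label * F.relu(margin - dist) ** 2
    # Ambiguous pairs: keep their embeddings close.
    pull = (1 - clear_label) * dist ** 2
    return (push + pull).mean()

# Usage sketch: embed candidate query pairs and favor those far apart in the
# learned space, i.e. pairs a teacher is likely to label without ambiguity.
encoder = SegmentEncoder(in_dim=20)
seg0, seg1 = torch.randn(8, 50, 20), torch.randn(8, 50, 20)
z0, z1 = encoder(seg0), encoder(seg1)
ambiguity_score = -F.pairwise_distance(z0, z1)  # lower score => clearer query
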

Cite

Text

Mu et al. "CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Mu et al. "CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/mu2025icml-clarify/)

BibTeX

@inproceedings{mu2025icml-clarify,
  title     = {{CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries}},
  author    = {Mu, Ni and Hu, Hao and Hu, Xiao and Yang, Yiqin and Xu, Bo and Jia, Qing-Shan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {45050-45068},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/mu2025icml-clarify/}
}