Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback

Cite

Text

Zhou et al. "Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I26.34992

Markdown

[Zhou et al. "Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/zhou2025aaai-sequence/) doi:10.1609/AAAI.V39I26.34992

BibTeX

@inproceedings{zhou2025aaai-sequence,
  title     = {{Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback}},
  author    = {Zhou, Jiayi and Ji, Jiaming and Dai, Josef and Yang, Yaodong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {27765--27773},
  doi       = {10.1609/AAAI.V39I26.34992},
  url       = {https://mlanthology.org/aaai/2025/zhou2025aaai-sequence/}
}