Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning

Abstract

Reinforcement Learning with Human Feedback (RLHF) has achieved great success in aligning large language models (LLMs) with human preferences. Prevalent RLHF approaches are reward-based, following the Bradley-Terry (BT) model assumption, which may not fully capture the complexity of human preferences. In this paper, we explore RLHF under a general preference framework and approach it from a game-theoretic perspective. Specifically, we formulate the problem as a two-player game and propose a novel online algorithm, iterative Nash policy optimization (INPO). The key idea is to let the policy play against itself via no-regret learning, thereby approximating the Nash policy. Unlike previous methods, INPO bypasses the need for estimating the expected win rate for individual responses, which typically incurs high computational or annotation costs. Instead, we introduce a new loss objective that is directly minimized over a preference dataset. We provide theoretical analysis for our approach and demonstrate its effectiveness through experiments on various representative benchmarks. With a LLaMA-3-8B-based SFT model, INPO achieves a 42.6% length-controlled win rate on AlpacaEval 2.0 and a 37.8% win rate on Arena-Hard, showing substantial improvement over state-of-the-art online RLHF algorithms.
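
For intuition, below is a minimal self-play sketch in PyTorch on a toy discrete response set. It is not the paper's objective: the preference oracle P, the step size eta, and the IPO-style squared margin loss against the previous iterate pi_t are illustrative assumptions standing in for INPO's actual loss. What it does show is the structure the abstract describes: approximating the Nash policy by repeatedly fitting a new policy on preference pairs sampled from its own previous iterate, with no per-response win-rate estimation.

import torch

torch.manual_seed(0)
num_responses = 8        # toy discrete "response" space
eta = 0.1                # assumed OMD-style step size (hypothetical value)
rounds, inner_steps = 10, 200

# Hypothetical general preference oracle: P[i, j] = Pr(response i beats j),
# constrained so that P[i, j] + P[j, i] = 1. Stands in for human/AI feedback.
P = torch.rand(num_responses, num_responses)
P = 0.5 * (P + (1.0 - P.T))

def sample_preference_pairs(logits, n=4096):
    """Sample two responses from the current policy and label a winner
    via the preference oracle P (no expected win rates are estimated)."""
    probs = torch.softmax(logits, dim=0)
    y = torch.multinomial(probs, n, replacement=True)
    y_prime = torch.multinomial(probs, n, replacement=True)
    first_wins = torch.bernoulli(P[y, y_prime]).bool()
    return torch.where(first_wins, y, y_prime), torch.where(first_wins, y_prime, y)

logits_t = torch.zeros(num_responses)  # pi_0: uniform policy

for t in range(rounds):
    # Self-play: build a preference dataset from the current iterate pi_t.
    y_w, y_l = sample_preference_pairs(logits_t)
    logits = logits_t.clone().requires_grad_(True)
    opt = torch.optim.Adam([logits], lr=0.05)
    log_pi_t = torch.log_softmax(logits_t, dim=0)
    for _ in range(inner_steps):
        log_pi = torch.log_softmax(logits, dim=0)
        # Assumed IPO-style squared loss on the log-ratio margin relative to
        # the previous iterate; the eta/2 target margin is illustrative only.
        h = (log_pi[y_w] - log_pi_t[y_w]) - (log_pi[y_l] - log_pi_t[y_l])
        loss = ((h - eta / 2.0) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    logits_t = logits.detach()

print(torch.softmax(logits_t, dim=0))  # policy concentrating toward a Nash policy of P

The only supervision each round is sampled winner/loser pairs, mirroring the abstract's point that INPO's loss is minimized directly over a preference dataset rather than over estimated win rates for individual responses.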

Cite

Text

Zhang et al. "Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning." International Conference on Learning Representations, 2025.

Markdown

[Zhang et al. "Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-iterative/)

BibTeX

@inproceedings{zhang2025iclr-iterative,
  title     = {{Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning}},
  author    = {Zhang, Yuheng and Yu, Dian and Peng, Baolin and Song, Linfeng and Tian, Ye and Huo, Mingyue and Jiang, Nan and Mi, Haitao and Yu, Dong},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhang2025iclr-iterative/}
}