GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-Time Alignment
Abstract
Large Language Models (LLMs) exhibit impressive capabilities but require careful alignment with human preferences. Traditional training-time methods finetune LLMs using human preference datasets but incur significant training costs and require repeated training to handle diverse user preferences. Test-time alignment methods address this by using reward models (RMs) to guide frozen LLMs without retraining. However, existing test-time approaches rely on trajectory-level RMs, which are designed to evaluate complete responses, making them unsuitable for autoregressive text generation that requires computing next-token rewards from partial responses. To address this, we introduce GenARM, a test-time alignment approach that leverages the Autoregressive Reward Model—a novel reward parametrization designed to predict next-token rewards for efficient and effective autoregressive generation. Theoretically, we demonstrate that this parametrization can provably guide frozen LLMs toward any distribution achievable by traditional RMs within the KL-regularized reinforcement learning framework. Experimental results show that GenARM significantly outperforms prior test-time alignment baselines and matches the performance of training-time methods. Additionally, GenARM enables efficient weak-to-strong guidance, aligning larger LLMs with smaller RMs without the high costs of training larger models. Furthermore, GenARM supports multi-objective alignment, allowing real-time trade-offs between preference dimensions and catering to diverse user preferences without retraining. Our project page is available at: https://genarm.github.io.
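The abstract describes guiding a frozen base LLM at decoding time with next-token rewards from an autoregressive reward model (ARM). Below is a minimal sketch of that idea, assuming the ARM is parametrized as a token-level language model whose log-probabilities act as next-token rewards, with guidance strength set by a KL-regularization coefficient beta; the function name, the logit-combination rule, and the toy inputs are illustrative assumptions rather than the paper's exact formulation.

import torch

def genarm_next_token(base_logits, arm_logits, beta=1.0):
    # Sample one token from a distribution proportional to
    # pi_base(y_t | y_<t) * pi_arm(y_t | y_<t)^(1 / beta)
    # (illustrative form; beta trades off reward guidance vs. the base model).
    combined = torch.log_softmax(base_logits, dim=-1) \
             + torch.log_softmax(arm_logits, dim=-1) / beta
    probs = torch.softmax(combined, dim=-1)
    return torch.multinomial(probs, num_samples=1)

# Toy usage with random logits over a 10-token vocabulary.
vocab = 10
base_logits = torch.randn(1, vocab)  # next-token logits from the frozen base LLM
arm_logits = torch.randn(1, vocab)   # next-token logits from the autoregressive RM
print(genarm_next_token(base_logits, arm_logits, beta=0.5))

Under the same sketch, multi-objective alignment would amount to summing several ARMs' log-probabilities with user-chosen weights before sampling, which is what would allow trade-offs between preference dimensions to be adjusted at inference time without retraining.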
Cite
Text
Xu et al. "GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-Time Alignment." International Conference on Learning Representations, 2025.
Markdown
[Xu et al. "GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-Time Alignment." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/xu2025iclr-genarm/)
BibTeX
@inproceedings{xu2025iclr-genarm,
  title     = {{GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-Time Alignment}},
  author    = {Xu, Yuancheng and Sehwag, Udari Madhushani and Koppel, Alec and Zhu, Sicheng and An, Bang and Huang, Furong and Ganesh, Sumitra},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/xu2025iclr-genarm/}
}