Imitating Language via Scalable Inverse Reinforcement Learning
Abstract
The majority of language model training builds on imitation learning. It covers pretraining, supervised fine-tuning, and affects the starting conditions for reinforcement learning from human feedback (RLHF). The simplicity and scalability of maximum likelihood estimation (MLE) for next token prediction led to its role as the predominant paradigm. However, the broader field of imitation learning can more effectively utilize the sequential structure underlying autoregressive generation. We investigate the inverse reinforcement learning (IRL) perspective on imitation, extracting rewards and directly optimizing sequences instead of individual token likelihoods, and evaluate its benefits for fine-tuning large language models. We provide a new angle by reformulating inverse soft Q-learning as a temporal-difference-regularized extension of MLE. This creates a principled connection between MLE and IRL and allows trading off added complexity against increased performance and diversity of generations in the supervised fine-tuning (SFT) setting. We find clear advantages for IRL-based imitation, in particular for retaining diversity while maximizing task performance, rendering IRL a strong alternative on fixed SFT datasets even without online data generation. Our analysis of IRL-extracted reward functions further indicates that tighter integration of supervised and preference-based LLM post-training can yield more robust rewards.
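A minimal sketch of what such a temporal-difference-regularized MLE objective can look like, assuming the model logits play the role of a soft Q-function $Q_\theta$, with induced policy $\pi_\theta(a \mid s) \propto \exp Q_\theta(s, a)$, soft value $V_\theta$, discount $\gamma$, and an illustrative trade-off weight $\lambda$ (these symbols and the squared penalty are assumptions for exposition, not necessarily the paper's exact formulation); setting $\lambda = 0$ recovers plain MLE:

% Illustrative sketch only: the squared TD penalty and the weight \lambda are
% expository assumptions, not the paper's exact objective.
\begin{align}
  V_\theta(s) &= \log \sum_{a'} \exp Q_\theta(s, a') \\
  \mathcal{L}(\theta) &=
    \underbrace{-\,\mathbb{E}_{(s_t, a_t) \sim \mathcal{D}}
      \big[ \log \pi_\theta(a_t \mid s_t) \big]}_{\text{MLE on next tokens}}
    \;+\; \lambda \,
    \underbrace{\mathbb{E}_{(s_t, s_{t+1}) \sim \mathcal{D}}
      \big[ \big( V_\theta(s_t) - \gamma V_\theta(s_{t+1}) \big)^2 \big]}_{\text{temporal difference regularizer}}
\end{align}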
Cite
Text
Wulfmeier et al. "Imitating Language via Scalable Inverse Reinforcement Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-2880
Markdown
[Wulfmeier et al. "Imitating Language via Scalable Inverse Reinforcement Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/wulfmeier2024neurips-imitating/) doi:10.52202/079017-2880
BibTeX
@inproceedings{wulfmeier2024neurips-imitating,
title = {{Imitating Language via Scalable Inverse Reinforcement Learning}},
author = {Wulfmeier, Markus and Bloesch, Michael and Vieillard, Nino and Ahuja, Arun and Bornschein, Jörg and Huang, Sandy and Sokolov, Artem and Barnes, Matt and Desjardins, Guillaume and Bewley, Alex and Bechtle, Sarah Maria Elisabeth and Springenberg, Jost Tobias and Momchev, Nikola and Bachem, Olivier and Geist, Matthieu and Riedmiller, Martin},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-2880},
url = {https://mlanthology.org/neurips/2024/wulfmeier2024neurips-imitating/}
}