Scaling Offline RL via Efficient and Expressive Shortcut Models

Abstract

Diffusion and flow models have emerged as powerful generative approaches capable of modeling diverse and multimodal behavior. However, applying these models to offline RL remains challenging because their iterative noise-sampling processes make policy optimization difficult. In this paper, we introduce Scalable Offline Reinforcement Learning (SORL), a new offline RL algorithm that leverages shortcut models, a novel class of generative models, to scale both training and inference. SORL's policy can capture complex data distributions and is trained simply and efficiently in a single-stage procedure. At test time, SORL supports both sequential and parallel inference scaling by using the learned Q-function as a verifier. We demonstrate that SORL achieves strong performance across a range of offline RL tasks and exhibits positive scaling behavior with increased test-time compute.
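
As a rough illustration of the parallel inference-scaling idea described above (sampling several candidate actions and letting the learned Q-function act as a verifier), here is a minimal best-of-N sketch. The callables `policy_sample` and `q_value` are placeholders assumed for illustration, not SORL's actual interfaces.

```python
import numpy as np

def select_action_best_of_n(policy_sample, q_value, state, n_candidates=8):
    """Hypothetical sketch of Q-as-verifier parallel scaling:
    draw N candidate actions from the generative policy, score each
    with the learned Q-function, and return the highest-value one."""
    candidates = [policy_sample(state) for _ in range(n_candidates)]
    scores = np.array([q_value(state, a) for a in candidates])
    return candidates[int(np.argmax(scores))]
```

Increasing `n_candidates` is the knob that trades extra test-time compute for better action selection, which is the scaling behavior the abstract refers to.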

Cite

Text

Espinosa-Dice et al. "Scaling Offline RL via Efficient and Expressive Shortcut Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Espinosa-Dice et al. "Scaling Offline RL via Efficient and Expressive Shortcut Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/espinosadice2025neurips-scaling/)

BibTeX

@inproceedings{espinosadice2025neurips-scaling,
  title     = {{Scaling Offline RL via Efficient and Expressive Shortcut Models}},
  author    = {Espinosa-Dice, Nicolas and Zhang, Yiyi and Chen, Yiding and Guo, Bradley and Oertell, Owen and Swamy, Gokul and Brantley, Kianté and Sun, Wen},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/espinosadice2025neurips-scaling/}
}