Provable Zero-Shot Generalization in Offline Reinforcement Learning

Abstract

In this work, we study offline reinforcement learning (RL) with the zero-shot generalization (ZSG) property, where the agent has access to an offline dataset containing experiences from different environments, and its goal is to train a policy on the training environments that performs well on test environments without further interaction. Existing work has shown that classical offline RL fails to generalize to new, unseen environments. We propose pessimistic empirical risk minimization (PERM) and pessimistic proximal policy optimization (PPPO), both of which leverage pessimistic policy evaluation to guide policy learning and enhance generalization. We show that both PERM and PPPO are capable of finding a near-optimal policy with ZSG. Our result serves as a first step toward understanding the foundations of the generalization phenomenon in offline reinforcement learning.
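To make the core idea of pessimistic policy evaluation concrete, below is a minimal illustrative sketch (not the paper's PERM or PPPO algorithms): each candidate policy is scored by its empirical value minus an uncertainty penalty, averaged over offline data pooled from several training environments, and the policy with the best pessimistic score is selected. The bandit-style setup, variable names, and penalty form are all assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_envs, n_actions, n_samples = 5, 4, 200

    # Offline dataset: per training environment, per action, a batch of observed rewards.
    true_means = rng.uniform(0.0, 1.0, size=(n_envs, n_actions))
    data = [
        {a: rng.normal(true_means[e, a], 0.5, size=rng.integers(5, n_samples))
         for a in range(n_actions)}
        for e in range(n_envs)
    ]

    def pessimistic_value(env_data, action, beta=1.0):
        """Empirical mean reward minus an uncertainty penalty that shrinks with sample count."""
        rewards = env_data[action]
        return rewards.mean() - beta / np.sqrt(len(rewards))

    # Candidate policies: here, simply "always play action a".
    scores = []
    for a in range(n_actions):
        # Average the pessimistic evaluation over all training environments,
        # so the selected policy is judged by its worst-case-aware score across contexts.
        scores.append(np.mean([pessimistic_value(d, a) for d in data]))

    best_action = int(np.argmax(scores))
    print("pessimistic scores:", np.round(scores, 3))
    print("selected action:", best_action)

The penalty term discourages choosing actions whose apparent value rests on few samples, which is the intuition behind using pessimism to guard against distribution shift in offline RL.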

Cite

Text

Wang et al. "Provable Zero-Shot Generalization in Offline Reinforcement Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Wang et al. "Provable Zero-Shot Generalization in Offline Reinforcement Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/wang2025icml-provable/)

BibTeX

@inproceedings{wang2025icml-provable,
  title     = {{Provable Zero-Shot Generalization in Offline Reinforcement Learning}},
  author    = {Wang, Zhiyong and Yang, Chen and Lui, John C.S. and Zhou, Dongruo},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {65122--65143},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/wang2025icml-provable/}
}