Model-Based RL as a Minimalist Approach to Horizon-Free and Second-Order Bounds

Abstract

Learning a transition model via Maximum Likelihood Estimation (MLE) followed by planning inside the learned model is perhaps the simplest and most standard Model-based Reinforcement Learning (RL) framework. In this work, we show that such a simple Model-based RL scheme, when equipped with optimistic and pessimistic planning procedures, achieves strong regret and sample complexity bounds in online and offline RL settings. In particular, we show that when the trajectory-wise reward is normalized between zero and one and the transition is time-homogeneous, this scheme achieves nearly horizon-free and second-order bounds.
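To make the pipeline described in the abstract concrete, below is a minimal illustrative sketch of the two ingredients in a tabular MDP: count-based MLE estimation of a time-homogeneous transition model, followed by optimistic value iteration inside the learned model. This is not the paper's exact algorithm or bonus; the function names, the `1/sqrt(n)` bonus, and the `bonus_scale` parameter are illustrative assumptions.

```python
import numpy as np

def mle_transition(counts):
    """MLE of a time-homogeneous transition model from visit counts.

    counts has shape (S, A, S): number of observed (s, a, s') transitions.
    Returns P_hat(s' | s, a); unvisited (s, a) pairs default to uniform.
    """
    totals = counts.sum(axis=-1, keepdims=True)
    num_states = counts.shape[-1]
    return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / num_states)

def optimistic_plan(P_hat, reward, counts, H, bonus_scale=1.0):
    """Finite-horizon value iteration in the learned model with an exploration bonus.

    A simple count-based bonus (illustrative, not the paper's construction) makes
    under-explored (s, a) pairs look more attractive, yielding an optimistic policy.
    """
    S, A = reward.shape
    n = np.maximum(counts.sum(axis=-1), 1)      # visits per (s, a)
    bonus = bonus_scale / np.sqrt(n)            # optimism for rarely visited pairs
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = reward + bonus + P_hat @ V          # optimistic Bellman backup
        Q = np.minimum(Q, H)                    # keep values bounded by the horizon
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V
```

A pessimistic (offline) variant would subtract the bonus instead of adding it, penalizing state-action pairs that are poorly covered by the dataset.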

Cite

Text

Wang et al. "Model-Based RL as a Minimalist Approach to Horizon-Free and Second-Order Bounds." International Conference on Learning Representations, 2025.

Markdown

[Wang et al. "Model-Based RL as a Minimalist Approach to Horizon-Free and Second-Order Bounds." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/wang2025iclr-modelbased/)

BibTeX

@inproceedings{wang2025iclr-modelbased,
  title     = {{Model-Based RL as a Minimalist Approach to Horizon-Free and Second-Order Bounds}},
  author    = {Wang, Zhiyong and Zhou, Dongruo and Lui, John C.S. and Sun, Wen},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/wang2025iclr-modelbased/}
}