Taming "Data-Hungry" Reinforcement Learning? Stability in Continuous State-Action Spaces

Abstract

We introduce a novel framework for analyzing reinforcement learning (RL) in continuous state-action spaces, and use it to prove fast rates of convergence in both off-line and on-line settings. Our analysis highlights two key stability properties, relating to how changes in value functions and/or policies affect the Bellman operator and occupation measures. We argue that these properties are satisfied in many continuous state-action Markov decision processes. Our analysis also offers fresh perspectives on the roles of pessimism and optimism in off-line and on-line RL.
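
For reference, the Bellman operator and occupation measure mentioned in the abstract can be written in standard discounted-MDP notation as follows; this is a generic sketch assuming an MDP with transition kernel P, reward r, discount factor gamma, and initial distribution rho, and the paper's own definitions and notation may differ.

% Generic sketch, not the paper's exact formulation.
% Bellman evaluation operator for a policy \pi acting on a state-action function f:
\[
  (\mathcal{T}^{\pi} f)(s, a)
  \;=\; r(s, a) \;+\; \gamma \,
  \mathbb{E}_{s' \sim P(\cdot \mid s, a),\; a' \sim \pi(\cdot \mid s')}
  \bigl[ f(s', a') \bigr].
\]
% Normalized discounted occupation measure of \pi over state-action sets B:
\[
  \mu^{\pi}(B)
  \;=\; (1 - \gamma) \sum_{t = 0}^{\infty} \gamma^{t} \,
  \Pr\bigl[ (s_t, a_t) \in B \;\big|\; s_0 \sim \rho,\ a_t \sim \pi(\cdot \mid s_t) \bigr].
\]

The stability properties discussed in the paper concern how perturbations of f or \pi propagate through these two objects.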

Cite

Text

Duan and Wainwright. "Taming "Data-Hungry" Reinforcement Learning? Stability in Continuous State-Action Spaces." Neural Information Processing Systems, 2024. doi:10.52202/079017-2279

Markdown

[Duan and Wainwright. "Taming "Data-Hungry" Reinforcement Learning? Stability in Continuous State-Action Spaces." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/duan2024neurips-taming/) doi:10.52202/079017-2279

BibTeX

@inproceedings{duan2024neurips-taming,
  title     = {{Taming ``Data-Hungry'' Reinforcement Learning? Stability in Continuous State-Action Spaces}},
  author    = {Duan, Yaqi and Wainwright, Martin J.},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2279},
  url       = {https://mlanthology.org/neurips/2024/duan2024neurips-taming/}
}