Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces
Abstract
Principled decision-making in continuous state–action spaces is impossible without some assumptions. A common approach is to assume Lipschitz continuity of the Q-function. We show that, unfortunately, this property fails to hold in many typical domains. We propose a new coarse-grained smoothness definition that generalizes the notion of Lipschitz continuity, is more widely applicable, and allows us to compute significantly tighter bounds on Q-functions, leading to improved learning. We provide a theoretical analysis of our new smoothness definition, and discuss its implications and impact on control and exploration in continuous domains.
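The Lipschitz assumption the abstract refers to says that nearby state–action pairs have nearby Q-values: |Q(x) − Q(x′)| ≤ L·d(x, x′) for a metric d. A minimal sketch of why this yields bounds on unqueried Q-values is below; the function name, sample format, and Euclidean metric are illustrative assumptions, not the paper's method (the paper's contribution is a coarse-grained generalization of exactly this idea).

```python
import math

def lipschitz_upper_bound(x, samples, L):
    """Upper-bound Q(x) from observed (point, value) samples under an
    L-Lipschitz assumption: Q(x) <= q_i + L * d(x, x_i) for every sample i,
    so the tightest bound is the minimum over samples. Uses Euclidean d."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(q + L * dist(x, xi) for xi, q in samples)

# Two observed Q-values; bound Q at a midpoint under L = 2.
samples = [((0.0, 0.0), 1.0), ((1.0, 0.0), 3.0)]
print(lipschitz_upper_bound((0.5, 0.0), samples, L=2.0))  # min(1+1, 3+1) = 2.0
```

When the true Q-function violates the Lipschitz assumption (as the abstract argues happens in many typical domains), this bound can be invalid or, with a large enough L to remain valid, too loose to be useful for control or exploration.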
Cite
Text
Gottesman et al. "Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces." Artificial Intelligence and Statistics, 2023.

Markdown
[Gottesman et al. "Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/gottesman2023aistats-coarsegrained/)

BibTeX
@inproceedings{gottesman2023aistats-coarsegrained,
title = {{Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces}},
author = {Gottesman, Omer and Asadi, Kavosh and Allen, Cameron S. and Lobel, Samuel and Konidaris, George and Littman, Michael},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
  pages = {1390--1410},
volume = {206},
url = {https://mlanthology.org/aistats/2023/gottesman2023aistats-coarsegrained/}
}