Lipschitz Continuity in Model-Based Reinforcement Learning
Abstract
We examine the impact of learning Lipschitz continuous models in the context of model-based reinforcement learning. We provide a novel bound on multi-step prediction error of Lipschitz models where we quantify the error using the Wasserstein metric. We go on to prove an error bound for the value-function estimate arising from Lipschitz models and show that the estimated value function is itself Lipschitz. We conclude with empirical results that show the benefits of controlling the Lipschitz constant of neural-network models.
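The abstract's closing point, controlling the Lipschitz constant of a neural-network model, can be illustrated with a minimal sketch. For a ReLU network, the product of the layers' spectral norms upper-bounds the network's Lipschitz constant (ReLU is 1-Lipschitz and Lipschitz constants compose multiplicatively), so rescaling weights to cap that product is one simple control mechanism. The function names below are illustrative, not from the paper:

```python
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Upper-bound the L2 Lipschitz constant of a ReLU network by the
    product of its layers' spectral norms (largest singular values)."""
    bound = 1.0
    for W in weight_matrices:
        # Spectral norm = Lipschitz constant of the linear map x -> W @ x.
        bound *= np.linalg.norm(W, ord=2)
    return bound

def constrain_weights(weight_matrices, k):
    """Rescale each layer so its spectral norm is at most k**(1/n),
    guaranteeing the product bound above does not exceed k."""
    per_layer = k ** (1.0 / len(weight_matrices))
    constrained = []
    for W in weight_matrices:
        s = np.linalg.norm(W, ord=2)
        # Shrink only layers that exceed the per-layer budget.
        constrained.append(W * min(1.0, per_layer / s))
    return constrained
```

This is a loose bound (the true Lipschitz constant can be much smaller) and only one of several ways to constrain a model; the paper's experiments are what motivate doing so at all.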
Cite

Text
Asadi et al. "Lipschitz Continuity in Model-Based Reinforcement Learning." International Conference on Machine Learning, 2018.

Markdown
[Asadi et al. "Lipschitz Continuity in Model-Based Reinforcement Learning." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/asadi2018icml-lipschitz/)

BibTeX
@inproceedings{asadi2018icml-lipschitz,
  title     = {{Lipschitz Continuity in Model-Based Reinforcement Learning}},
  author    = {Asadi, Kavosh and Misra, Dipendra and Littman, Michael},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {264--273},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/asadi2018icml-lipschitz/}
}