Continuous-Time Model-Based Reinforcement Learning
Abstract
Model-based reinforcement learning (MBRL) approaches rely on discrete-time state transition models, whereas physical systems and the vast majority of control tasks operate in continuous time. To avoid the time-discretization approximation of the underlying process, we propose a continuous-time MBRL framework based on a novel actor-critic method. Our approach also infers the unknown state evolution differentials with Bayesian neural ordinary differential equations (ODEs) to account for epistemic uncertainty. We implement and test our method on a new ODE-RL suite that explicitly solves continuous-time control systems. Our experiments illustrate that the model is robust against irregular and noisy data, and can solve classic control problems in a sample-efficient manner.
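
The core modeling idea described in the abstract, replacing a discrete-time transition model with a learned state derivative integrated by an ODE solver, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes PyTorch with the torchdiffeq package, holds the action piecewise-constant over a rollout, uses a plain deterministic network (the paper's Bayesian treatment of the dynamics for epistemic uncertainty is omitted), and the names `ODEDynamics` and `rollout` are hypothetical.

```python
# Minimal sketch: learn ds/dt = f_theta(s, a) and roll trajectories out with an
# ODE solver instead of a discrete-time transition model.
# Assumes PyTorch and the torchdiffeq package; all names are illustrative.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEDynamics(nn.Module):
    """Neural network approximating the state differential ds/dt = f_theta(s, a)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )
        self._action = None  # action held piecewise-constant during a rollout

    def set_action(self, action: torch.Tensor) -> None:
        self._action = action

    def forward(self, t, state):
        # Time-invariant dynamics: t is unused, the derivative depends on (s, a).
        return self.net(torch.cat([state, self._action], dim=-1))


def rollout(dynamics: ODEDynamics, s0: torch.Tensor, action: torch.Tensor,
            horizon: float = 1.0, steps: int = 20) -> torch.Tensor:
    """Integrate the learned ODE from s0 over [0, horizon] under one action."""
    dynamics.set_action(action)
    t = torch.linspace(0.0, horizon, steps)
    return odeint(dynamics, s0, t)  # shape: (steps, batch, state_dim)


if __name__ == "__main__":
    dyn = ODEDynamics(state_dim=4, action_dim=1)
    s0 = torch.zeros(2, 4)   # batch of two initial states
    a = torch.zeros(2, 1)    # piecewise-constant control input
    traj = rollout(dyn, s0, a)
    print(traj.shape)        # torch.Size([20, 2, 4])
```

Because the rollout queries the solver at arbitrary times rather than at a fixed step size, the same learned model can be evaluated on irregularly sampled trajectories, which is one of the robustness properties the abstract highlights.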
Cite
Text
Yildiz et al. "Continuous-Time Model-Based Reinforcement Learning." International Conference on Machine Learning, 2021.
Markdown
[Yildiz et al. "Continuous-Time Model-Based Reinforcement Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/yildiz2021icml-continuoustime/)
BibTeX
@inproceedings{yildiz2021icml-continuoustime,
  title     = {{Continuous-Time Model-Based Reinforcement Learning}},
  author    = {Yildiz, Cagatay and Heinonen, Markus and Lähdesmäki, Harri},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {12009--12018},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/yildiz2021icml-continuoustime/}
}