Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time

Abstract

In this paper, we introduce Hamilton-Jacobi-Bellman (HJB) equations for Q-functions in continuous-time optimal control problems with Lipschitz continuous controls. The standard Q-function used in reinforcement learning is shown to be the unique viscosity solution of the HJB equation. A necessary and sufficient condition for optimality is provided using the viscosity solution framework. Using the HJB equation, we develop a Q-learning method for continuous-time dynamical systems, and we further propose a DQN-like algorithm for high-dimensional state and control spaces. The performance of the proposed Q-learning algorithm is demonstrated on 1-, 10-, and 20-dimensional dynamical systems.
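
To make the abstract's approach concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation) of Q-learning driven by the residual of a continuous-time HJB equation. It assumes infinite-horizon discounting at rate rho, scalar dynamics x' = f(x, a), a running reward r(x, a), and controls whose time derivative is bounded by a Lipschitz constant L, so the residual takes the form rho*Q - r - (dQ/dx)f - max over |b| <= L of (dQ/da)b; the toy dynamics, reward, quadratic features, and semi-gradient update are all illustrative assumptions, not details taken from the paper.

import numpy as np

rho, L = 0.1, 1.0  # discount rate and control Lipschitz bound (assumed values)

def f(x, a):
    # toy scalar dynamics x' = f(x, a); assumed for illustration
    return -x + a

def r(x, a):
    # toy running reward; assumed for illustration
    return -(x ** 2 + 0.1 * a ** 2)

def phi(x, a):
    # quadratic features for the model Q(x, a) = w . phi(x, a) (assumed)
    return np.array([x * x, x * a, a * a, x, a, 1.0])

def hjb_residual(w, x, a, eps=1e-4):
    # Finite-difference residual of a continuous-time HJB equation for Q:
    # rho*Q - r - (dQ/dx)*f - max_{|b|<=L} (dQ/da)*b, where the last term
    # evaluates to L*|dQ/da| for a one-dimensional control.
    Q = lambda x_, a_: w @ phi(x_, a_)
    dQdx = (Q(x + eps, a) - Q(x - eps, a)) / (2 * eps)
    dQda = (Q(x, a + eps) - Q(x, a - eps)) / (2 * eps)
    return rho * Q(x, a) - r(x, a) - dQdx * f(x, a) - L * abs(dQda)

rng = np.random.default_rng(0)
w, alpha = np.zeros(6), 1e-2
for _ in range(20000):
    x, a = rng.uniform(-2.0, 2.0, size=2)  # sample a state-action pair
    res = hjb_residual(w, x, a)
    # semi-gradient step on 0.5*res**2, differentiating only through Q(x, a)
    w -= alpha * res * rho * phi(x, a)
print("fitted Q-model weights:", w)

A DQN-like variant of the kind the abstract mentions for high-dimensional problems would presumably replace the linear model with a neural network and minimize the same residual over minibatches.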

Cite

Text

Kim and Yang. "Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time." Proceedings of the 2nd Conference on Learning for Dynamics and Control, vol. 120, pp. 739-748, 2020.

Markdown

[Kim and Yang. "Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time." Proceedings of the 2nd Conference on Learning for Dynamics and Control, vol. 120, pp. 739-748, 2020.](https://mlanthology.org/l4dc/2020/kim2020l4dc-hamiltonjacobibellman/)

BibTeX

@inproceedings{kim2020l4dc-hamiltonjacobibellman,
  title     = {{Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time}},
  author    = {Kim, Jeongho and Yang, Insoon},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  year      = {2020},
  pages     = {739--748},
  volume    = {120},
  publisher = {PMLR},
  url       = {https://mlanthology.org/l4dc/2020/kim2020l4dc-hamiltonjacobibellman/}
}