Operator Splitting Value Iteration

Abstract

We introduce new planning and reinforcement learning algorithms for discounted MDPs that utilize an approximate model of the environment to accelerate the convergence of the value function. Inspired by the splitting approach in numerical linear algebra, we introduce *Operator Splitting Value Iteration* (OS-VI) for both Policy Evaluation and Control problems. OS-VI achieves a much faster convergence rate when the model is accurate enough. We also introduce a sample-based version of the algorithm called OS-Dyna. Unlike the traditional Dyna architecture, OS-Dyna still converges to the correct value function in the presence of model approximation error.
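To give a concrete feel for the splitting idea the abstract alludes to, here is a minimal policy-evaluation sketch: the Bellman equation \(V = r + \gamma P V\) is split using the approximate dynamics \(\hat{P}\), so each iteration solves the linear system under the model exactly and corrects with the residual dynamics \(P - \hat{P}\). This is an illustrative instance of the classic matrix-splitting scheme, not necessarily the paper's exact formulation; all names below (`os_pe`, the toy MDP) are hypothetical.

```python
import numpy as np

def os_pe(r, P, P_hat, gamma, iters=50):
    """Operator-splitting sketch: iterate
    V <- (I - gamma*P_hat)^{-1} (r + gamma*(P - P_hat) @ V),
    i.e., solve the Bellman equation under the approximate model P_hat,
    with a correction term from the true dynamics P."""
    n = len(r)
    A = np.eye(n) - gamma * P_hat  # system solved exactly with the model
    V = np.zeros(n)
    for _ in range(iters):
        V = np.linalg.solve(A, r + gamma * (P - P_hat) @ V)
    return V

# Tiny 2-state Markov reward process with slightly perturbed model dynamics.
P = np.array([[0.9, 0.1], [0.2, 0.8]])        # true transition matrix
P_hat = np.array([[0.85, 0.15], [0.25, 0.75]])  # approximate model
r = np.array([1.0, 0.0])
gamma = 0.95

V = os_pe(r, P, P_hat, gamma)
V_true = np.linalg.solve(np.eye(2) - gamma * P, r)  # exact value function
print(np.max(np.abs(V - V_true)))  # small: converges to the true V
```

Note that the fixed point of this iteration is the true value function even though the model \(\hat{P}\) is inexact, which mirrors the abstract's claim that OS-Dyna converges to the correct value function despite model approximation error; when \(\hat{P}\) is close to \(P\), the correction term is small and convergence is fast.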

Cite

Text

Rakhsha et al. "Operator Splitting Value Iteration." Neural Information Processing Systems, 2022.

Markdown

[Rakhsha et al. "Operator Splitting Value Iteration." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/rakhsha2022neurips-operator/)

BibTeX

@inproceedings{rakhsha2022neurips-operator,
  title     = {{Operator Splitting Value Iteration}},
  author    = {Rakhsha, Amin and Wang, Andrew and Ghavamzadeh, Mohammad and Farahmand, Amir-massoud},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/rakhsha2022neurips-operator/}
}