Deflated Dynamics Value Iteration

Abstract

The Value Iteration (VI) algorithm is an iterative procedure for computing the value function of a Markov decision process, and it is the basis of many reinforcement learning (RL) algorithms as well. Since the error of VI after $k$ iterations decays as $O(\gamma^k)$, convergence is slow when the discount factor $\gamma$ is close to $1$. To accelerate the computation of the value function, we propose Deflated Dynamics Value Iteration (DDVI). DDVI uses matrix splitting and matrix deflation techniques to effectively remove (deflate) the top-$s$ dominant eigen-structure of the transition matrix $\mathcal{P}^\pi$. We prove that this leads to a $\tilde{O}(\gamma^k |\lambda_{s+1}|^k)$ convergence rate, where $\lambda_{s+1}$ is the $(s+1)$-th largest eigenvalue of the dynamics matrix. We also extend DDVI to the RL setting and present the Deflated Dynamics Temporal Difference (DDTD) algorithm. We empirically show the effectiveness of the proposed algorithms.
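
To make the deflation idea concrete, the following is a minimal NumPy sketch (not the paper's exact DDVI algorithm) for policy evaluation with $s = 1$: the top eigenvalue $\lambda_1 = 1$ of the row-stochastic matrix $\mathcal{P}^\pi$ is deflated via the rank-one matrix $E_1 = \mathbf{1}\mu^\top$ built from the stationary distribution $\mu$, and the iteration $(I - \gamma E_1)V_{k+1} = r + \gamma(\mathcal{P}^\pi - E_1)V_k$ is compared against standard VI. The random MDP, variable names, and the specific rank-one splitting are illustrative assumptions; both iterations share the fixed point $V^\pi$, but the deflated one contracts at roughly $\gamma|\lambda_2|$ per step rather than $\gamma$.

```python
import numpy as np

# Illustrative sketch of a deflated value-iteration step (s = 1), not the
# paper's exact algorithm. All problem data below are synthetic assumptions.
rng = np.random.default_rng(0)
n, gamma = 30, 0.99

P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)                 # P^pi, row-stochastic
r = rng.random(n)                                 # reward vector r^pi
V_star = np.linalg.solve(np.eye(n) - gamma * P, r)  # exact value function

# Rank-1 deflation: E_1 = 1 mu^T with mu the stationary distribution,
# so P - E_1 has spectral radius |lambda_2| < 1.
eigvals, eigvecs = np.linalg.eig(P.T)
mu = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
mu /= mu.sum()
E1 = np.outer(np.ones(n), mu)
A = np.eye(n) - gamma * E1                        # splitting matrix (I - gamma E_1)

V_vi = np.zeros(n)
V_ddvi = np.zeros(n)
for k in range(200):
    # standard VI: error decays as O(gamma^k)
    V_vi = r + gamma * P @ V_vi
    # deflated iteration: error decays as roughly O((gamma |lambda_2|)^k)
    V_ddvi = np.linalg.solve(A, r + gamma * (P - E1) @ V_ddvi)

print("VI error:          ", np.linalg.norm(V_vi - V_star, np.inf))
print("deflated VI error: ", np.linalg.norm(V_ddvi - V_star, np.inf))
```

A quick check that the deflated iteration is consistent: at a fixed point, $(I - \gamma E_1)V = r + \gamma(\mathcal{P}^\pi - E_1)V$ simplifies to $V = r + \gamma \mathcal{P}^\pi V$, i.e., the usual Bellman equation, so deflation changes only the convergence speed, not the solution.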

Cite

Text

Lee et al. "Deflated Dynamics Value Iteration." Transactions on Machine Learning Research, 2025.

Markdown

[Lee et al. "Deflated Dynamics Value Iteration." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/lee2025tmlr-deflated/)

BibTeX

@article{lee2025tmlr-deflated,
  title     = {{Deflated Dynamics Value Iteration}},
  author    = {Lee, Jongmin and Rakhsha, Amin and Ryu, Ernest K. and Farahmand, Amir-massoud},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/lee2025tmlr-deflated/}
}