Convergence of Policy Mirror Descent Beyond Compatible Function Approximation
Abstract
Modern policy optimization methods roughly follow the policy mirror descent (PMD) algorithmic template, for which there are by now numerous theoretical convergence results. However, most of these either target tabular environments, or can be applied effectively only when the class of policies being optimized over satisfies strong closure conditions, which is typically not the case when working with parametric policy classes in large-scale environments. In this work, we develop a theoretical framework for PMD for general policy classes where we replace the closure conditions with a generally weaker variational gradient dominance assumption, and obtain upper bounds on the rate of convergence to the best-in-class policy. Our main result leverages a novel notion of smoothness with respect to a local norm induced by the occupancy measure of the current policy, and casts PMD as a particular instance of smooth non-convex optimization in non-Euclidean space.
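For context, the PMD template referenced in the abstract, in its standard tabular form, updates the policy at every state by solving a Bregman-regularized linear maximization. The display below is standard background on PMD (with $\eta$ a step size, $Q^{\pi_t}$ the action-value function of the current policy, and $D_\phi$ the Bregman divergence induced by a mirror map $\phi$); it illustrates the template only, not the local-norm analysis developed in the paper:

$$\pi_{t+1}(\cdot \mid s) \in \operatorname*{arg\,max}_{p \in \Delta(\mathcal{A})} \Big\{ \eta \, \big\langle Q^{\pi_t}(s, \cdot),\, p \big\rangle - D_\phi\big(p,\ \pi_t(\cdot \mid s)\big) \Big\}, \qquad \forall s \in \mathcal{S}.$$

Taking $\phi$ to be the negative entropy (so that $D_\phi$ is the KL divergence) recovers the familiar multiplicative-weights, natural-policy-gradient-style update $\pi_{t+1}(a \mid s) \propto \pi_t(a \mid s)\, \exp\big(\eta\, Q^{\pi_t}(s,a)\big)$.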
Cite
Text
Sherman et al. "Convergence of Policy Mirror Descent Beyond Compatible Function Approximation." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Sherman et al. "Convergence of Policy Mirror Descent Beyond Compatible Function Approximation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/sherman2025icml-convergence/)
BibTeX
@inproceedings{sherman2025icml-convergence,
title = {{Convergence of Policy Mirror Descent Beyond Compatible Function Approximation}},
author = {Sherman, Uri and Koren, Tomer and Mansour, Yishay},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {54825-54863},
volume = {267},
url = {https://mlanthology.org/icml/2025/sherman2025icml-convergence/}
}