Ordering-Based Conditions for Global Convergence of Policy Gradient Methods

Abstract

We prove that, for finite-arm bandits with linear function approximation, the global convergence of policy gradient (PG) methods depends on interrelated properties of the policy update and the representation. First, we establish a few key observations that frame the study: **(i)** global convergence can be achieved under linear function approximation without policy or reward realizability, both for standard Softmax PG and for natural policy gradient (NPG); **(ii)** approximation error is not a key quantity for characterizing global convergence in either algorithm; **(iii)** the conditions on the representation that imply global convergence differ between the two algorithms. Overall, these observations call into question approximation error as an appropriate quantity for characterizing the global convergence of PG methods under linear function approximation. Second, motivated by these observations, we establish new general results: **(i)** NPG with linear function approximation achieves global convergence *if and only if* the projection of the reward onto the representable space preserves the optimal action's rank, a quantity that is not strongly related to approximation error; **(ii)** Softmax PG achieves global convergence if the representation satisfies a non-domination condition and preserves the ranking of rewards, conditions that go well beyond policy or reward realizability. We provide experimental results to support these theoretical findings.
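
To make these conditions concrete, below is a minimal NumPy sketch (an illustration, not the authors' code): `preserves_optimal_rank` checks the NPG condition by orthogonally projecting the reward onto the span of the feature matrix and comparing argmaxes, and `softmax_pg_step` performs one step of Softmax PG with a linear policy parameterization. The feature matrix `Phi`, reward `r`, step size, and all helper names are hypothetical choices made for this example.

```python
import numpy as np

def project_onto_span(Phi, r):
    """Orthogonal projection of the reward vector r onto span(Phi)."""
    # Least-squares coefficients give the projection Phi (Phi^T Phi)^+ Phi^T r.
    coeffs, *_ = np.linalg.lstsq(Phi, r, rcond=None)
    return Phi @ coeffs

def preserves_optimal_rank(Phi, r):
    """NPG condition from the abstract: does projecting r onto the
    representable space keep the optimal action's rank (same argmax)?"""
    return np.argmax(project_onto_span(Phi, r)) == np.argmax(r)

def softmax_pg_step(theta, Phi, r, eta=0.1):
    """One Softmax PG step with linear function approximation:
    pi = softmax(Phi @ theta), objective J(theta) = pi^T r."""
    logits = Phi @ theta
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    # Gradient of pi^T r w.r.t. theta: Phi^T (diag(pi) - pi pi^T) r.
    grad = Phi.T @ (pi * (r - pi @ r))
    return theta + eta * grad

if __name__ == "__main__":
    # 3 arms, 2 features; r lies outside span(Phi), so the reward is
    # NOT realizable, yet the projection can still keep the argmax.
    Phi = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
    r = np.array([1.0, 0.2, 0.1])
    print("rank preserved:", preserves_optimal_rank(Phi, r))  # True here

    theta = np.zeros(2)
    for _ in range(2000):
        theta = softmax_pg_step(theta, Phi, r)
    logits = Phi @ theta
    pi = np.exp(logits - logits.max()); pi /= pi.sum()
    print("final policy:", np.round(pi, 3))  # mass concentrates on arm 0
```

In this toy instance the reward is unrealizable (its projection differs from `r`), yet the projected reward keeps the same best arm: precisely the situation where the abstract's ordering-based conditions, rather than approximation error, predict global convergence.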

Cite

Text

Mei et al. "Ordering-Based Conditions for Global Convergence of Policy Gradient Methods." Neural Information Processing Systems, 2023.

Markdown

[Mei et al. "Ordering-Based Conditions for Global Convergence of Policy Gradient Methods." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/mei2023neurips-orderingbased/)

BibTeX

@inproceedings{mei2023neurips-orderingbased,
  title     = {{Ordering-Based Conditions for Global Convergence of Policy Gradient Methods}},
  author    = {Mei, Jincheng and Dai, Bo and Agarwal, Alekh and Ghavamzadeh, Mohammad and Szepesvari, Csaba and Schuurmans, Dale},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/mei2023neurips-orderingbased/}
}