Finite-Time Analysis of Entropy-Regularized Neural Natural Actor-Critic Algorithm
Abstract
Natural actor-critic (NAC) and its variants, equipped with the representation power of neural networks, have demonstrated impressive empirical success in solving Markov decision problems with large (potentially infinite) state spaces. In this paper, we present a finite-time analysis of NAC with neural network approximation, and identify the roles of neural networks, regularization and optimization techniques (e.g., gradient clipping and weight decay) to achieve provably good performance in terms of sample complexity, iteration complexity and overparametrization bounds for the actor and the critic. In particular, we prove that (i) entropy regularization and weight decay ensure stability by providing sufficient exploration to avoid near-deterministic and strictly suboptimal policies and (ii) regularization leads to sharp sample complexity and network width bounds in the regularized MDPs, yielding a favorable bias-variance tradeoff in policy optimization. In the process, we identify the importance of uniform approximation power of the actor neural network to achieve global optimality in policy optimization due to distributional shift.
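As a rough illustration of the ingredients highlighted in the abstract, below is a minimal sketch of an entropy-regularized actor-critic update with weight decay and gradient clipping in PyTorch. This is not the authors' implementation: the environment, network widths, and hyperparameters are illustrative assumptions, and the natural-gradient (Fisher-preconditioned) actor step analyzed in the paper is replaced by a plain SGD step for brevity.

```python
# Minimal sketch (not the paper's algorithm verbatim): one entropy-regularized
# actor-critic update with weight decay and gradient clipping. The natural-
# gradient preconditioning analyzed in the paper is omitted in favor of a
# plain SGD step. All sizes and hyperparameters below are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
S, A, TAU = 4, 3, 0.1  # state dim, action count, entropy temperature (assumed)

actor = nn.Sequential(nn.Linear(S, 32), nn.ReLU(), nn.Linear(32, A))
critic = nn.Sequential(nn.Linear(S, 32), nn.ReLU(), nn.Linear(32, 1))

# weight_decay plays the role of the weight-decay regularization in the abstract.
opt_actor = torch.optim.SGD(actor.parameters(), lr=1e-2, weight_decay=1e-4)
opt_critic = torch.optim.SGD(critic.parameters(), lr=1e-2, weight_decay=1e-4)

def update(state, reward, next_state, gamma=0.99):
    """One actor-critic step on a single transition (illustrative only)."""
    dist = torch.distributions.Categorical(logits=actor(state))
    action = dist.sample()

    # Critic: squared TD loss against a bootstrapped target.
    with torch.no_grad():
        target = reward + gamma * critic(next_state)
    value = critic(state)
    td_error = (target - value).detach().squeeze()  # advantage estimate
    critic_loss = (target - value).pow(2).mean()
    opt_critic.zero_grad()
    critic_loss.backward()
    torch.nn.utils.clip_grad_norm_(critic.parameters(), max_norm=1.0)  # clipping
    opt_critic.step()

    # Actor: policy gradient with an entropy bonus at temperature TAU.
    actor_loss = -(dist.log_prob(action) * td_error + TAU * dist.entropy()).mean()
    opt_actor.zero_grad()
    actor_loss.backward()
    torch.nn.utils.clip_grad_norm_(actor.parameters(), max_norm=1.0)  # clipping
    opt_actor.step()

# Toy usage on one random transition.
s, s_next = torch.randn(1, S), torch.randn(1, S)
update(s, torch.tensor([[1.0]]), s_next)
```

The entropy term (`TAU * dist.entropy()`) is what discourages near-deterministic, strictly suboptimal policies, which is the stability mechanism the abstract attributes to entropy regularization and weight decay.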
Cite
Text
Cayci et al. "Finite-Time Analysis of Entropy-Regularized Neural Natural Actor-Critic Algorithm." Transactions on Machine Learning Research, 2024.
Markdown
[Cayci et al. "Finite-Time Analysis of Entropy-Regularized Neural Natural Actor-Critic Algorithm." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/cayci2024tmlr-finitetime/)
BibTeX
@article{cayci2024tmlr-finitetime,
  title = {{Finite-Time Analysis of Entropy-Regularized Neural Natural Actor-Critic Algorithm}},
  author = {Cayci, Semih and He, Niao and Srikant, R.},
  journal = {Transactions on Machine Learning Research},
  year = {2024},
  url = {https://mlanthology.org/tmlr/2024/cayci2024tmlr-finitetime/}
}