Recurrent Natural Policy Gradient for POMDPs

Abstract

In this paper, we study a natural policy gradient method based on recurrent neural networks (RNNs) for partially observable Markov decision processes, in which RNNs are used for both policy parameterization and policy evaluation to address the curse of dimensionality in non-Markovian reinforcement learning. We present finite-time and finite-width analyses for both the critic (recurrent temporal difference learning) and the resulting recurrent natural policy gradient method in the near-initialization regime. Our analysis demonstrates the efficiency of RNNs for problems with short-term memory, providing explicit bounds on the required network widths and sample complexity, and highlights the challenges posed by long-term dependencies.
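
To make the setup concrete, below is a minimal illustrative sketch of the general recipe the abstract describes: an RNN policy and an RNN critic conditioned on the observation history, the critic trained by recurrent TD(0), and the policy updated with a damped natural-gradient step. This is not the paper's algorithm or analysis setting; the GRU architecture, the toy POMDP stand-in, the TD-error advantage proxy, the empirical-Fisher construction, and all hyperparameters are assumptions made for illustration only.

```python
# Illustrative sketch only (assumptions throughout, not the paper's method):
# GRU policy + GRU critic over observation histories, recurrent TD(0) for the
# critic, and a damped natural-gradient update for the policy.
import torch
import torch.nn as nn

obs_dim, act_dim, hidden = 3, 2, 8

class RecurrentNet(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, obs_seq):              # obs_seq: (B, T, obs_dim)
        h, _ = self.rnn(obs_seq)
        return self.head(h)                  # (B, T, out_dim)

policy = RecurrentNet(act_dim)               # action logits from the history
critic = RecurrentNet(1)                     # value of the history

def rollout(T=10):
    """Toy POMDP stand-in (assumption): random observations, reward favors action 0."""
    obs = torch.randn(1, T, obs_dim)
    dist = torch.distributions.Categorical(logits=policy(obs))
    acts = dist.sample()                      # (1, T)
    rews = (acts == 0).float() + 0.1 * torch.randn(1, T)
    return obs, rews, dist.log_prob(acts)

gamma, damping, lr = 0.99, 1e-2, 0.05
critic_opt = torch.optim.SGD(critic.parameters(), lr=1e-2)

for it in range(20):
    obs, rews, logps = rollout()

    # Critic: recurrent TD(0) on the observation-history value function.
    values = critic(obs).squeeze(-1)                       # (1, T)
    targets = rews[:, :-1] + gamma * values[:, 1:].detach()
    td_loss = ((values[:, :-1] - targets) ** 2).mean()
    critic_opt.zero_grad(); td_loss.backward(); critic_opt.step()

    # Actor: vanilla policy gradient with the TD error as advantage proxy.
    with torch.no_grad():
        adv = targets - values[:, :-1]
    pg_loss = -(logps[:, :-1] * adv).mean()
    params = list(policy.parameters())
    grads = torch.autograd.grad(pg_loss, params, retain_graph=True)
    g = torch.cat([gr.reshape(-1) for gr in grads])

    # Damped empirical Fisher from per-step score vectors, then F^{-1} g.
    scores = []
    for t in range(logps.shape[1] - 1):
        s = torch.autograd.grad(logps[0, t], params, retain_graph=True)
        scores.append(torch.cat([si.reshape(-1) for si in s]))
    S = torch.stack(scores)                                 # (T-1, num_params)
    F = S.T @ S / S.shape[0] + damping * torch.eye(S.shape[1])
    step = torch.linalg.solve(F, g)

    # Natural-gradient descent step on the policy parameters.
    with torch.no_grad():
        offset = 0
        for p in params:
            n = p.numel()
            p -= lr * step[offset:offset + n].view_as(p)
            offset += n
```

In this sketch the damped empirical Fisher is inverted directly, which is only feasible for tiny networks; at realistic widths one would instead use Fisher-vector products with conjugate gradient or a compatible function approximation, and the paper's finite-width guarantees concern the near-initialization regime rather than this toy construction.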

Cite

Text

Cayci and Eryilmaz. "Recurrent Natural Policy Gradient for POMDPs." ICML 2024 Workshops: RLControlTheory, 2024.

Markdown

[Cayci and Eryilmaz. "Recurrent Natural Policy Gradient for POMDPs." ICML 2024 Workshops: RLControlTheory, 2024.](https://mlanthology.org/icmlw/2024/cayci2024icmlw-recurrent/)

BibTeX

@inproceedings{cayci2024icmlw-recurrent,
  title     = {{Recurrent Natural Policy Gradient for POMDPs}},
  author    = {Cayci, Semih and Eryilmaz, Atilla},
  booktitle = {ICML 2024 Workshops: RLControlTheory},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/cayci2024icmlw-recurrent/}
}