DNA: Proximal Policy Optimization with a Dual Network Architecture

Abstract

This paper explores the problem of simultaneously learning a value function and policy in deep actor-critic reinforcement learning models. We find that the common practice of learning these functions jointly is sub-optimal due to an order-of-magnitude difference in noise levels between the two tasks. Instead, we show that learning these tasks independently, but with a constrained distillation phase, significantly improves performance. Furthermore, we find that policy gradient noise levels decrease when using a lower-*variance* return estimate, whereas value learning noise levels decrease with a lower-*bias* estimate. Together these insights inform an extension to Proximal Policy Optimization we call *Dual Network Architecture* (DNA), which significantly outperforms its predecessor. DNA also exceeds the performance of the popular Rainbow DQN algorithm on four of the five environments tested, even under more difficult stochastic control settings.
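The abstract describes DNA's overall structure: separate policy and value networks trained on different return targets, joined by a constrained distillation phase. The sketch below illustrates one plausible reading of that structure, assuming a PPO-style clipped policy loss, a low-variance advantage estimate for the policy, a low-bias return target for the value network, and a KL-penalised distillation loss; the network sizes, estimator choices, and loss weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the dual-network training structure described in the
# abstract. All hyperparameters and loss forms here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Policy network with an auxiliary value head used only during distillation."""

    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_actions)
        self.aux_value = nn.Linear(hidden, 1)

    def forward(self, obs):
        h = self.body(obs)
        dist = torch.distributions.Categorical(logits=self.pi(h))
        return dist, self.aux_value(h).squeeze(-1)


class ValueNet(nn.Module):
    """Separate value network, trained independently on its own return target."""

    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, obs):
        return self.net(obs).squeeze(-1)


def ppo_policy_loss(policy, obs, actions, old_log_probs, advantages, clip=0.2):
    # Standard PPO clipped surrogate; the advantages would come from a
    # lower-variance return estimator (per the abstract's observation).
    dist, _ = policy(obs)
    ratio = torch.exp(dist.log_prob(actions) - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip) * advantages
    return -torch.min(ratio * advantages, clipped).mean()


def value_net_loss(value_net, obs, returns):
    # The value network regresses a lower-bias return target.
    return F.mse_loss(value_net(obs), returns)


def distillation_loss(policy, value_net, obs, old_dist, beta=1.0):
    # Distil the value network into the policy's auxiliary value head while a
    # KL penalty constrains how far the policy distribution may drift
    # (one way to read the "constrained distillation phase").
    dist, aux_value = policy(obs)
    value_term = F.mse_loss(aux_value, value_net(obs).detach())
    kl_term = torch.distributions.kl_divergence(old_dist, dist).mean()
    return value_term + beta * kl_term
```

In this reading, each rollout is followed by three independent optimisation phases (policy, value, distillation), each with its own minibatches and optimiser steps, which is what lets the two tasks be tuned to their very different noise levels.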

Cite

Text

Aitchison and Sweetser. "DNA: Proximal Policy Optimization with a Dual Network Architecture." Neural Information Processing Systems, 2022.

Markdown

[Aitchison and Sweetser. "DNA: Proximal Policy Optimization with a Dual Network Architecture." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/aitchison2022neurips-dna/)

BibTeX

@inproceedings{aitchison2022neurips-dna,
  title     = {{DNA: Proximal Policy Optimization with a Dual Network Architecture}},
  author    = {Aitchison, Matthew and Sweetser, Penny},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/aitchison2022neurips-dna/}
}