Identifying Policy Gradient Subspaces
Abstract
Policy gradient methods hold great potential for solving complex continuous control tasks. Still, their training efficiency can be improved by exploiting structure within the optimization problem. Recent work indicates that supervised learning can be accelerated by leveraging the fact that gradients lie in a low-dimensional and slowly-changing subspace. In this paper, we conduct a thorough evaluation of this phenomenon for two popular deep policy gradient methods on various simulated benchmark tasks. Our results demonstrate the existence of such gradient subspaces despite the continuously changing data distribution inherent to reinforcement learning. These findings reveal promising directions for future work on more efficient reinforcement learning, e.g., through improving parameter-space exploration or enabling second-order optimization.
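The abstract's central claim is that policy gradients concentrate in a low-dimensional, slowly-changing subspace. A minimal sketch of how one might check this empirically is shown below: it estimates a k-dimensional subspace from a batch of collected gradient vectors via SVD and then measures how much of a later gradient's norm is captured by projection onto that subspace. The function names and the SVD-based construction are illustrative assumptions, not the paper's exact procedure (which may instead use, e.g., dominant Hessian eigenvectors).

```python
import numpy as np

def subspace_from_gradients(grads, k):
    """Estimate a k-dimensional gradient subspace from flattened gradient
    samples using the top right singular vectors of the gradient matrix."""
    G = np.stack(grads)                       # shape (n_samples, n_params)
    _, _, vt = np.linalg.svd(G, full_matrices=False)
    return vt[:k].T                           # (n_params, k), orthonormal columns

def fraction_in_subspace(grad, basis):
    """Fraction of a gradient's squared norm retained after projecting it
    onto the subspace spanned by the orthonormal basis columns."""
    proj = basis @ (basis.T @ grad)
    return float(np.dot(proj, proj) / np.dot(grad, grad))

# Toy usage: synthetic gradients that mostly lie in a 5-dimensional subspace.
rng = np.random.default_rng(0)
true_basis = np.linalg.qr(rng.normal(size=(1000, 5)))[0]
grads = [true_basis @ rng.normal(size=5) + 0.01 * rng.normal(size=1000)
         for _ in range(50)]
est_basis = subspace_from_gradients(grads[:40], k=5)
print(fraction_in_subspace(grads[45], est_basis))  # close to 1.0
```

In a reinforcement learning setting, the same measurement would be repeated over training to test whether the identified subspace also changes slowly despite the shifting data distribution.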
Cite
Text
Schneider et al. "Identifying Policy Gradient Subspaces." International Conference on Learning Representations, 2024.

Markdown

[Schneider et al. "Identifying Policy Gradient Subspaces." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/schneider2024iclr-identifying/)

BibTeX
@inproceedings{schneider2024iclr-identifying,
title = {{Identifying Policy Gradient Subspaces}},
author = {Schneider, Jan and Schumacher, Pierre and Guist, Simon and Chen, Le and Haeufle, Daniel and Schölkopf, Bernhard and Büchler, Dieter},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/schneider2024iclr-identifying/}
}