Low-Rank Representation of Reinforcement Learning Policies
Abstract
We propose a general framework for policy representation in reinforcement learning tasks. The framework involves finding a low-dimensional embedding of the policy in a reproducing kernel Hilbert space (RKHS). The use of RKHS-based methods allows us to derive strong theoretical guarantees on the expected return of the reconstructed policy. Such guarantees are typically lacking in black-box models, but are highly desirable in tasks requiring stability and convergence guarantees. We conduct several experiments on classic RL domains. The results confirm that policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns.
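As a rough illustration of the idea, the sketch below embeds a policy's action probabilities through a rank-k approximation of an RBF kernel feature map and reconstructs the policy from that embedding. This is a minimal sketch, not the paper's algorithm: the Nyström/ridge-regression combination, the synthetic policy, and every parameter (the kernel width gamma, the rank k) are illustrative assumptions.

import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge

# Hypothetical setup: sampled states and a trained policy's action
# probabilities (both synthetic placeholders, not the paper's data).
rng = np.random.default_rng(0)
n_states, state_dim, n_actions = 500, 4, 2
states = rng.normal(size=(n_states, state_dim))
logits = rng.normal(size=(n_states, n_actions))
policy_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Low-dimensional RKHS embedding: approximate the RBF feature map with
# a rank-k Nystroem expansion, then fit a linear map to the policy.
k = 16  # embedding rank (assumed, controls compression)
feature_map = Nystroem(kernel="rbf", gamma=0.5, n_components=k, random_state=0)
Z = feature_map.fit_transform(states)  # (n_states, k) embedding
reconstructor = Ridge(alpha=1e-3).fit(Z, policy_probs)

def reconstructed_policy(s):
    # Map a new state through the embedding, predict action scores,
    # and renormalize into a valid probability distribution.
    scores = reconstructor.predict(feature_map.transform(s[None]))[0]
    scores = np.clip(scores, 1e-8, None)
    return scores / scores.sum()

print(reconstructed_policy(states[0]))

In this sketch, the rank k trades off compression against how faithfully the reconstructed policy matches the original, which in turn governs the expected return of the reconstruction, the quantity the paper's theoretical guarantees bound.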
Cite
Text
Mazoure et al. "Low-Rank Representation of Reinforcement Learning Policies." Journal of Artificial Intelligence Research, 2022. doi:10.1613/JAIR.1.13854

Markdown
[Mazoure et al. "Low-Rank Representation of Reinforcement Learning Policies." Journal of Artificial Intelligence Research, 2022.](https://mlanthology.org/jair/2022/mazoure2022jair-lowrank/) doi:10.1613/JAIR.1.13854

BibTeX
@article{mazoure2022jair-lowrank,
title = {{Low-Rank Representation of Reinforcement Learning Policies}},
author = {Mazoure, Bogdan and Doan, Thang and Li, Tianyu and Makarenkov, Vladimir and Pineau, Joelle and Precup, Doina and Rabusseau, Guillaume},
journal = {Journal of Artificial Intelligence Research},
year = {2022},
pages = {597--636},
doi = {10.1613/JAIR.1.13854},
volume = {75},
url = {https://mlanthology.org/jair/2022/mazoure2022jair-lowrank/}
}