Equivariant Reinforcement Learning Under Partial Observability
Abstract
Incorporating inductive biases is a promising approach for obtaining sample-efficient solutions in challenging robot learning domains. This paper identifies partially observable domains where symmetries can serve as a useful inductive bias for efficient learning. Specifically, by encoding equivariance with respect to specific group symmetries into the neural networks, our actor-critic reinforcement learning agents can reuse past solutions in related scenarios. Consequently, our equivariant agents significantly outperform non-equivariant approaches in both sample efficiency and final performance, as demonstrated through experiments on a range of robotic tasks in simulation and on real hardware.
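To make the idea of encoding group symmetries into actor-critic networks concrete, below is a minimal PyTorch sketch of a rotation (C4) equivariant encoder with an invariant critic head. The class names, the choice of the C4 group, and all layer sizes are illustrative assumptions for this sketch, not the architecture used in the paper.

```python
# Minimal sketch (assumed, not the authors' code): a C4-equivariant lifting
# convolution plus a rotation-invariant critic head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class C4LiftingConv(nn.Module):
    """Convolve the input with all four 90-degree rotations of one shared
    kernel; rotating the input rotates the feature maps and cyclically
    permutes the group dimension (equivariance under C4)."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.1
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        outs = [
            F.conv2d(x, torch.rot90(self.weight, k, dims=[2, 3]), padding="same")
            for k in range(4)
        ]
        return torch.stack(outs, dim=1)  # (B, 4, out_channels, H, W)


class InvariantCriticHead(nn.Module):
    """Pooling over the group and spatial dimensions yields features that are
    invariant to input rotations, the natural symmetry for a scalar value."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # (B, 4, C, H, W)
        pooled = feats.max(dim=1).values.mean(dim=[2, 3])  # group max + spatial mean
        return self.fc(pooled)


if __name__ == "__main__":
    obs = torch.randn(2, 3, 32, 32)  # a batch of image observations
    encoder = C4LiftingConv(3, 16, kernel_size=5)
    critic = InvariantCriticHead(16)
    value = critic(encoder(obs))
    value_rot = critic(encoder(torch.rot90(obs, 1, dims=[2, 3])))
    print(torch.allclose(value, value_rot, atol=1e-4))  # same value for rotated input
```

The sanity check at the bottom illustrates the payoff: a rotated observation yields the same value estimate, so experience gathered in one orientation transfers to symmetric ones.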
Cite
Text
Nguyen et al. "Equivariant Reinforcement Learning Under Partial Observability." Conference on Robot Learning, 2023.

Markdown
[Nguyen et al. "Equivariant Reinforcement Learning Under Partial Observability." Conference on Robot Learning, 2023.](https://mlanthology.org/corl/2023/nguyen2023corl-equivariant/)

BibTeX
@inproceedings{nguyen2023corl-equivariant,
title = {{Equivariant Reinforcement Learning Under Partial Observability}},
author = {Nguyen, Hai Huu and Baisero, Andrea and Klee, David and Wang, Dian and Platt, Robert and Amato, Christopher},
booktitle = {Conference on Robot Learning},
year = {2023},
pages = {3309--3320},
volume = {229},
url = {https://mlanthology.org/corl/2023/nguyen2023corl-equivariant/}
}