Leveraging Mutual Information for Asymmetric Learning Under Partial Observability
Abstract
Even though partial observability is prevalent in robotics, most reinforcement learning studies avoid it because of the difficulty of learning a policy that can efficiently memorize past events and seek information. Fortunately, in many cases, learning can be done in an asymmetric setting where states are available during training but not during execution. Prior studies often leverage the state to indirectly influence the training of a history-based actor (actor-critic methods) or a history-based critic (value-based methods). Instead, we propose using state-observation and state-history mutual information to improve the agent's architecture and its ability to seek information and memorize efficiently, through intrinsic rewards and an auxiliary task. In extensive experiments, our method outperforms strong baselines and achieves successful sim-to-real transfer to a real robot.
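For intuition, below is a minimal sketch of one way a state-history mutual-information signal could be turned into an intrinsic reward, using an InfoNCE-style contrastive lower bound between privileged-state embeddings and history embeddings. All names, network sizes, the temperature, and the reward definition are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Illustrative sketch only: an InfoNCE-style lower bound on I(state; history),
# with the per-sample positive log-probability reused as an intrinsic reward.
# Encoder sizes, the temperature, and the reward form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InfoNCEEstimator(nn.Module):
    """Contrastive estimator of mutual information over a minibatch."""

    def __init__(self, state_dim: int, history_dim: int, embed_dim: int = 64):
        super().__init__()
        self.state_enc = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))
        self.hist_enc = nn.Sequential(
            nn.Linear(history_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))
        self.temperature = 0.1  # assumed value

    def _logits(self, states: torch.Tensor, histories: torch.Tensor) -> torch.Tensor:
        # Pairwise similarity between every state and every history in the batch.
        s = F.normalize(self.state_enc(states), dim=-1)
        h = F.normalize(self.hist_enc(histories), dim=-1)
        return (s @ h.t()) / self.temperature

    def loss(self, states: torch.Tensor, histories: torch.Tensor) -> torch.Tensor:
        # InfoNCE loss; minimizing it maximizes a lower bound on I(state; history).
        logits = self._logits(states, histories)
        labels = torch.arange(len(states), device=logits.device)
        return F.cross_entropy(logits, labels)

    @torch.no_grad()
    def intrinsic_reward(self, states: torch.Tensor, histories: torch.Tensor) -> torch.Tensor:
        # Log-probability of matching each history to its own state; higher when
        # the agent's history is informative about the hidden state.
        logits = self._logits(states, histories)
        return torch.diag(F.log_softmax(logits, dim=-1))
```

In this kind of setup, the estimator would be trained on minibatches of (state, history) pairs from the replay buffer, and the per-sample reward would be added to the environment reward during policy optimization; the paper's actual estimator, auxiliary task, and architecture may differ.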
Cite
Text
Nguyen et al. "Leveraging Mutual Information for Asymmetric Learning Under Partial Observability." Proceedings of The 8th Conference on Robot Learning, 2024.
Markdown
[Nguyen et al. "Leveraging Mutual Information for Asymmetric Learning Under Partial Observability." Proceedings of The 8th Conference on Robot Learning, 2024.](https://mlanthology.org/corl/2024/nguyen2024corl-leveraging/)
BibTeX
@inproceedings{nguyen2024corl-leveraging,
title = {{Leveraging Mutual Information for Asymmetric Learning Under Partial Observability}},
author = {Nguyen, Hai Huu and Van The, Long Dinh and Amato, Christopher and Platt, Robert},
booktitle = {Proceedings of The 8th Conference on Robot Learning},
year = {2024},
pages = {4546--4572},
volume = {270},
url = {https://mlanthology.org/corl/2024/nguyen2024corl-leveraging/}
}