Learning to Actively Reduce Memory Requirements for Robot Control Tasks
Abstract
Robots equipped with rich sensing modalities (e.g., RGB-D cameras) performing long-horizon tasks motivate the need for policies that are highly memory-efficient. State-of-the-art approaches for controlling robots often use memory representations that are excessively rich for the task or rely on handcrafted tricks for memory efficiency. Instead, this work provides a general approach for jointly synthesizing memory representations and policies; the resulting policies actively seek to reduce memory requirements. Specifically, we present a reinforcement learning framework that leverages an implementation of the group LASSO regularization to synthesize policies that employ low-dimensional and task-centric memory representations. We demonstrate the efficacy of our approach with simulated examples including navigation in discrete and continuous spaces as well as vision-based indoor navigation set in a photo-realistic simulator. The results on these examples indicate that our method is capable of finding policies that rely only on low-dimensional memory representations, improving generalization, and actively reducing memory requirements.
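The abstract's key mechanism is a group LASSO regularizer that drives whole groups of weights (e.g., those tied to one memory dimension) to zero, pruning those memory dimensions entirely. The paper's actual implementation is not shown here; below is a minimal, hypothetical sketch of the standard group LASSO penalty, with groups chosen as the rows of a weight matrix associated with individual memory dimensions.

```python
import numpy as np

def group_lasso_penalty(W, groups, lam=0.1):
    """Standard group LASSO penalty: lam * sum_g sqrt(|g|) * ||W[g]||_2.

    `groups` is a list of row-index lists; zeroing out an entire group
    corresponds to removing one memory dimension. (Illustrative only;
    not the authors' implementation.)
    """
    total = 0.0
    for g in groups:
        block = W[g]  # weights belonging to group g
        total += np.sqrt(block.size) * np.linalg.norm(block)
    return lam * total

# Toy example: a 4-dimensional memory state, one group per dimension.
# Rows 2 and 3 are near zero, so they contribute almost nothing to the
# penalty -- the regularizer has effectively pruned those dimensions.
W = np.array([[0.9, 0.1, 0.0,  0.0],
              [0.2, 0.8, 0.0,  0.0],
              [0.0, 0.0, 1e-6, 0.0],
              [0.0, 0.0, 0.0,  1e-6]])
groups = [[0], [1], [2], [3]]
penalty = group_lasso_penalty(W, groups, lam=0.1)
```

In training, this penalty would be added to the reinforcement learning objective so that gradient descent trades task reward against memory dimensionality.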
Cite
Text
Booker and Majumdar. "Learning to Actively Reduce Memory Requirements for Robot Control Tasks." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.
Markdown
[Booker and Majumdar. "Learning to Actively Reduce Memory Requirements for Robot Control Tasks." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.](https://mlanthology.org/l4dc/2021/booker2021l4dc-learning/)
BibTeX
@inproceedings{booker2021l4dc-learning,
title = {{Learning to Actively Reduce Memory Requirements for Robot Control Tasks}},
author = {Booker, Meghan and Majumdar, Anirudha},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
year = {2021},
pages = {125--137},
volume = {144},
url = {https://mlanthology.org/l4dc/2021/booker2021l4dc-learning/}
}