Environment Agnostic Representation for Visual Reinforcement Learning
Abstract
The generalization capability of vision-based deep reinforcement learning (RL) is indispensable for coping with the dynamic environment changes that appear in visual observations. The high-dimensional space of the visual input, however, makes it challenging to adapt an agent to unseen environments. In this work, we propose Environment Agnostic Reinforcement learning (EAR), a compact framework for domain generalization in visual deep RL. Environment-agnostic features (EAFs) are extracted by leveraging three novel objectives based on feature factorization, reconstruction, and episode-aware state shifting, so that policy learning relies only on vital features. EAR is a simple single-stage method with low model complexity and fast inference time, ensuring high reproducibility, while attaining state-of-the-art performance on the DeepMind Control Suite and DrawerWorld benchmarks.
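For a concrete picture of how a combined objective of this kind could be wired up, the following is a minimal PyTorch-style sketch, not the authors' implementation: the module names, network sizes, and the exact forms of the three losses (a reconstruction term, a decorrelation term standing in for feature factorization, and a consistency term between temporally shifted states from the same episode) are all assumptions made for illustration.

# Hedged sketch (assumptions, not the paper's code): factorize an observation into
# environment-agnostic and environment-specific parts, reconstruct the input from both,
# and keep the agnostic part consistent across shifted states within an episode.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedEncoder(nn.Module):
    def __init__(self, obs_dim=3 * 84 * 84, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(obs_dim, 256), nn.ReLU())
        self.agnostic_head = nn.Linear(256, feat_dim)    # environment-agnostic features (EAFs)
        self.specific_head = nn.Linear(256, feat_dim)    # environment-specific residual
        self.decoder = nn.Linear(2 * feat_dim, obs_dim)  # reconstruct the observation from both parts

    def forward(self, obs):
        h = self.backbone(obs)
        return self.agnostic_head(h), self.specific_head(h)

def combined_loss(model, obs, obs_shifted, w_rec=1.0, w_fac=0.1, w_shift=0.1):
    # obs_shifted: a temporally shifted observation from the same episode (assumed input).
    z_a, z_s = model(obs)
    z_a_shift, _ = model(obs_shifted)
    recon = model.decoder(torch.cat([z_a, z_s], dim=-1))
    loss_rec = F.mse_loss(recon, obs.flatten(1))                    # reconstruction objective
    loss_fac = F.cosine_similarity(z_a, z_s, dim=-1).abs().mean()   # factorization: decorrelate the two parts
    loss_shift = F.mse_loss(z_a, z_a_shift)                         # episode-aware shift consistency
    return w_rec * loss_rec + w_fac * loss_fac + w_shift * loss_shift

In such a setup, only the agnostic features z_a would be passed to the policy network, so that the loss weights trade off reconstruction fidelity against how aggressively environment-specific content is pushed out of the policy's input.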
Cite
Text
Choi et al. "Environment Agnostic Representation for Visual Reinforcement Learning." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00031
Markdown
[Choi et al. "Environment Agnostic Representation for Visual Reinforcement Learning." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/choi2023iccv-environment/) doi:10.1109/ICCV51070.2023.00031
BibTeX
@inproceedings{choi2023iccv-environment,
title = {{Environment Agnostic Representation for Visual Reinforcement Learning}},
author = {Choi, Hyesong and Lee, Hunsang and Jeong, Seongwon and Min, Dongbo},
booktitle = {International Conference on Computer Vision},
year = {2023},
pages = {263-273},
doi = {10.1109/ICCV51070.2023.00031},
url = {https://mlanthology.org/iccv/2023/choi2023iccv-environment/}
}