Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion
Abstract
We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO), a novel algorithm for visual transfer in Reinforcement Learning that explicitly learns to align the distributions of extracted features between a source and target task. WAPPO approximates and minimizes the Wasserstein-1 distance between the distributions of features from source and target domains via a novel Wasserstein Confusion objective. WAPPO outperforms the prior state-of-the-art in visual transfer and successfully transfers policies across Visual Cartpole and both the easy and hard settings of 16 OpenAI Procgen environments.
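The paper's Wasserstein Confusion objective uses an adversarial critic over learned features to approximate the Wasserstein-1 distance; as a minimal illustration of the quantity being minimized (not the paper's method), the sketch below computes the empirical Wasserstein-1 distance between two equal-size 1-D feature samples, where it has a closed form: the mean absolute difference of the sorted samples. The function name and example values are illustrative, not from the paper.

```python
import numpy as np

def wasserstein1_1d(a, b):
    """Empirical Wasserstein-1 distance between two equal-size 1-D samples.

    For sorted equal-size samples, the optimal transport plan matches
    order statistics, so the distance is the mean absolute difference.
    """
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    return float(np.mean(np.abs(a - b)))

# Toy "source" and "target" feature distributions: a constant shift of 0.5
source = np.array([0.0, 1.0, 2.0, 3.0])
target = source + 0.5
print(wasserstein1_1d(source, target))  # → 0.5
```

In WAPPO the features are high-dimensional and the distance is estimated adversarially (via a critic, in the spirit of WGANs) rather than in closed form, but the goal is the same: drive this distance between source- and target-domain feature distributions toward zero.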
Cite
Text

Roy and Konidaris. "Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I11.17139

Markdown

[Roy and Konidaris. "Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/roy2021aaai-visual/) doi:10.1609/AAAI.V35I11.17139

BibTeX
@inproceedings{roy2021aaai-visual,
title = {{Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion}},
author = {Roy, Josh and Konidaris, George Dimitri},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {9454-9462},
doi = {10.1609/AAAI.V35I11.17139},
url = {https://mlanthology.org/aaai/2021/roy2021aaai-visual/}
}