Wasserstein Distance Maximizing Intrinsic Control
Abstract
This paper deals with the problem of learning a skill-conditioned policy that acts meaningfully in the absence of a reward signal. Mutual information-based objectives have shown some success in learning skills that reach a diverse set of states in this setting. These objectives include a KL-divergence term, which is maximized by visiting distinct states even if those states are not far apart in the MDP. This paper presents an approach that rewards the agent for learning skills that maximize the Wasserstein distance of their state visitation from the start state of the skill. It shows that such an objective leads to a policy that covers more distance in the MDP than diversity-based objectives, and validates the results on a variety of Atari environments.
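The contrast the abstract draws can be illustrated with a minimal sketch (not the paper's implementation). A diversity-style signal only counts distinct states, whereas a Wasserstein-1 intrinsic reward measures how far the visitation distribution is from a point mass at the skill's start state; in the 1D case, that distance reduces to the mean absolute distance from the start. All names below are illustrative.

```python
def w1_to_start(states, start):
    """Wasserstein-1 distance between the empirical state-visitation
    distribution and a point mass at the start state (1D case):
    the mean absolute distance from the start."""
    return sum(abs(s - start) for s in states) / len(states)

def num_distinct(states):
    """A diversity-style signal: how many distinct states were visited."""
    return len(set(states))

start = 0
near = [0, 1, 0, 1, 2, 1]      # distinct states, all close to the start
far = [0, 5, 10, 15, 20, 25]   # states that travel far from the start

# Both skills visit several distinct states, but only the second
# covers real distance in the MDP, which the W1 term rewards.
print(num_distinct(near), w1_to_start(near, start))  # 3 0.833...
print(num_distinct(far), w1_to_start(far, start))    # 6 12.5
```

The sketch shows why a KL-style diversity bonus can be satisfied by shuttling between nearby states, while the Wasserstein objective favors skills whose visitation moves genuinely far from the start.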
Cite
Text
Durugkar et al. "Wasserstein Distance Maximizing Intrinsic Control." NeurIPS 2021 Workshops: DeepRL, 2021.
Markdown
[Durugkar et al. "Wasserstein Distance Maximizing Intrinsic Control." NeurIPS 2021 Workshops: DeepRL, 2021.](https://mlanthology.org/neuripsw/2021/durugkar2021neuripsw-wasserstein/)
BibTeX
@inproceedings{durugkar2021neuripsw-wasserstein,
title = {{Wasserstein Distance Maximizing Intrinsic Control}},
author = {Durugkar, Ishan and Hansen, Steven Stenberg and Spencer, Stephen and Mnih, Volodymyr},
booktitle = {NeurIPS 2021 Workshops: DeepRL},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/durugkar2021neuripsw-wasserstein/}
}