Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate
Abstract
In a reward-free environment, what is a suitable intrinsic objective for an agent to pursue so that it can learn an optimal task-agnostic exploration policy? In this paper, we argue that the entropy of the state distribution induced by finite-horizon trajectories is a sensible target. In particular, we present a novel and practical policy-search algorithm, Maximum Entropy POLicy optimization (MEPOL), to learn a policy that maximizes a non-parametric, $k$-nearest neighbors estimate of the state distribution entropy. In contrast to known methods, MEPOL is completely model-free, as it requires neither estimating the state distribution of any policy nor modeling the transition dynamics. Then, we empirically show that MEPOL allows learning a maximum-entropy exploration policy in high-dimensional, continuous-control domains, and that this policy facilitates learning meaningful reward-based tasks downstream.
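To give a concrete sense of the non-parametric entropy estimate the abstract refers to, below is a minimal sketch of a $k$-nearest-neighbors (Kozachenko-Leonenko-style) entropy estimator computed over states sampled from finite-horizon trajectories. This is not the authors' implementation; the function name, hyperparameters, and library choices are illustrative assumptions, and MEPOL itself optimizes such an estimate with a policy-gradient scheme not shown here.

```python
# A minimal sketch (not the authors' code) of a k-nearest-neighbors,
# Kozachenko-Leonenko-style entropy estimate of the empirical state
# distribution; names and defaults are illustrative assumptions.
import numpy as np
from scipy.special import gamma, digamma
from sklearn.neighbors import NearestNeighbors

def knn_state_entropy(states, k=4):
    """Non-parametric entropy estimate of the empirical state distribution.

    states: array of shape (n, d), states collected by the current policy.
    k: number of neighbors used by the estimator.
    """
    n, d = states.shape
    # Distance from each state to its k-th nearest neighbor (the query
    # point itself is returned at distance 0, hence k + 1 neighbors).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(states)
    dists, _ = nn.kneighbors(states)
    r_k = dists[:, -1] + 1e-12  # avoid log(0) for duplicate states
    # Volume of the unit ball in d dimensions (Euclidean metric).
    unit_ball = np.pi ** (d / 2) / gamma(d / 2 + 1)
    # Kozachenko-Leonenko estimate of differential entropy.
    return digamma(n) - digamma(k) + np.log(unit_ball) + d * np.mean(np.log(r_k))
```

In a task-agnostic exploration loop, one would collect a batch of states with the current policy, evaluate an estimate of this form, and update the policy parameters in the direction that increases it.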
Cite
Text
Mutti et al. "Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I10.17091
Markdown
[Mutti et al. "Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/mutti2021aaai-task/) doi:10.1609/AAAI.V35I10.17091
BibTeX
@inproceedings{mutti2021aaai-task,
title = {{Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate}},
author = {Mutti, Mirco and Pratissoli, Lorenzo and Restelli, Marcello},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
  pages = {9028--9036},
doi = {10.1609/AAAI.V35I10.17091},
url = {https://mlanthology.org/aaai/2021/mutti2021aaai-task/}
}