Model Based Bayesian Exploration
Abstract
Reinforcement learning systems are often concerned with balancing exploration of untested actions against exploitation of actions that are known to be good. The benefit of exploration can be estimated using the classical notion of Value of Information - the expected improvement in future decision quality arising from the information acquired by exploration. Estimating this quantity requires an assessment of the agent's uncertainty about its current value estimates for states. In this paper we investigate ways to represent and reason about this uncertainty in algorithms where the system attempts to learn a model of its environment. We explicitly represent uncertainty about the parameters of the model and build probability distributions over Q-values based on these. These distributions are used to compute a myopic approximation to the value of information for each action and hence to select the action that best balances exploration and exploitation.
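The following is a minimal sketch of the kind of procedure the abstract describes, not the authors' implementation: it assumes a tabular MDP whose transition probabilities carry Dirichlet posteriors (represented by hypothetical `dirichlet_counts` and `rewards` arrays), samples MDPs from that posterior to obtain a distribution over Q-values, and combines the mean Q-value with a myopic value-of-perfect-information term to pick an action.

```python
import numpy as np

def sample_q_values(dirichlet_counts, rewards, gamma=0.95, n_samples=50, n_iters=100):
    """Sample Q-functions by drawing transition models from Dirichlet
    posteriors and solving each sampled MDP by value iteration.
    dirichlet_counts: (S, A, S) posterior counts; rewards: (S, A)."""
    n_states, n_actions, _ = dirichlet_counts.shape
    q_samples = np.zeros((n_samples, n_states, n_actions))
    for k in range(n_samples):
        # Draw one transition model from the posterior over parameters.
        P = np.array([[np.random.dirichlet(dirichlet_counts[s, a])
                       for a in range(n_actions)] for s in range(n_states)])
        Q = np.zeros((n_states, n_actions))
        for _ in range(n_iters):
            V = Q.max(axis=1)
            Q = rewards + gamma * P @ V
        q_samples[k] = Q
    return q_samples

def myopic_vpi(q_samples, state):
    """Myopic value of perfect information for each action in `state`:
    the expected gain in decision quality if that action's true Q-value
    were revealed, estimated from the sampled Q-values."""
    q = q_samples[:, state, :]                 # (n_samples, n_actions)
    means = q.mean(axis=0)
    order = np.sort(means)
    best, second = order[-1], order[-2]
    best_a = int(means.argmax())
    vpi = np.zeros(q.shape[1])
    for a in range(q.shape[1]):
        if a == best_a:
            # Gain when the presumed-best action turns out worse than the runner-up.
            vpi[a] = np.maximum(second - q[:, a], 0.0).mean()
        else:
            # Gain when another action turns out better than the presumed best.
            vpi[a] = np.maximum(q[:, a] - best, 0.0).mean()
    return vpi

def select_action(q_samples, state):
    """Choose the action maximizing expected Q-value plus myopic VPI."""
    q_mean = q_samples[:, state, :].mean(axis=0)
    return int(np.argmax(q_mean + myopic_vpi(q_samples, state)))
```

In this sketch the exploration bonus comes entirely from posterior uncertainty: actions whose sampled Q-values could plausibly change the greedy choice receive a larger VPI term, so the agent explores exactly where more information about the model would improve its decisions.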
Cite
Text
Dearden et al. "Model Based Bayesian Exploration." Conference on Uncertainty in Artificial Intelligence, 1999.
Markdown
[Dearden et al. "Model Based Bayesian Exploration." Conference on Uncertainty in Artificial Intelligence, 1999.](https://mlanthology.org/uai/1999/dearden1999uai-model/)
BibTeX
@inproceedings{dearden1999uai-model,
title = {{Model Based Bayesian Exploration}},
author = {Dearden, Richard and Friedman, Nir and Andre, David},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {1999},
pages = {150-159},
url = {https://mlanthology.org/uai/1999/dearden1999uai-model/}
}