Tree Based Discretization for Continuous State Space Reinforcement Learning
Abstract
Reinforcement learning is an effective technique for learning action policies in discrete stochastic environments, but its efficiency can decay exponentially with the size of the state space. In many situations, significant portions of a large state space may be irrelevant to a specific goal and can be aggregated into a few relevant states. The U Tree algorithm generates a tree-based state discretization that efficiently finds the relevant state chunks of large propositional domains. In this paper, we extend the U Tree algorithm to challenging domains with a continuous state space for which there is no initial discretization. This Continuous U Tree algorithm transfers traditional regression tree techniques to reinforcement learning. We have performed experiments in a variety of domains that show that Continuous U Tree effectively handles large continuous state spaces. In this paper, we report on results in two domains: one gives a clear visualization of the algorithm, and the other empirically demonstrates an effective state discretization in a simple multi-agent environment.
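To make the idea of a tree-based state discretization concrete, here is a minimal sketch (not the authors' U Tree algorithm itself, which grows and refines the tree from experience): an axis-aligned decision tree maps a continuous state vector to a discrete leaf index, and that index then keys an ordinary Q-table. The class names, the hand-built splits, and the action labels are all illustrative assumptions.

```python
class Node:
    """One node of a tree that discretizes a continuous state space.

    Internal nodes test one state dimension against a threshold;
    leaves carry an abstract (aggregated) state index.
    """

    def __init__(self, dim=None, threshold=None, left=None, right=None,
                 leaf_id=None):
        self.dim = dim              # state dimension tested at this node
        self.threshold = threshold  # split point on that dimension
        self.left = left            # subtree for state[dim] <  threshold
        self.right = right          # subtree for state[dim] >= threshold
        self.leaf_id = leaf_id      # abstract state index (leaves only)

    def discretize(self, state):
        """Follow splits down to a leaf; return its abstract state id."""
        if self.leaf_id is not None:
            return self.leaf_id
        child = self.left if state[self.dim] < self.threshold else self.right
        return child.discretize(state)


# A hand-built tree with three abstract states over a 2-D continuous space.
tree = Node(dim=0, threshold=0.5,
            left=Node(leaf_id=0),
            right=Node(dim=1, threshold=0.2,
                       left=Node(leaf_id=1),
                       right=Node(leaf_id=2)))

# Each leaf gets its own row of a tabular Q-function, so standard
# discrete-state RL updates apply once the tree fixes the abstraction.
q_table = {leaf: {"left": 0.0, "right": 0.0} for leaf in (0, 1, 2)}

s = tree.discretize([0.7, 0.1])  # continuous state -> abstract state
q_table[s]["right"] = 1.0        # ordinary tabular update on that leaf
```

The point of the sketch is the interface: once the tree aggregates the continuous space into a small number of leaves, the rest of the learner is unmodified discrete-state reinforcement learning over the leaf indices.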
Cite
Uther and Veloso. "Tree Based Discretization for Continuous State Space Reinforcement Learning." AAAI Conference on Artificial Intelligence, 1998.
BibTeX
@inproceedings{uther1998aaai-tree,
title = {{Tree Based Discretization for Continuous State Space Reinforcement Learning}},
author = {Uther, William T. B. and Veloso, Manuela M.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1998},
pages = {769-774},
url = {https://mlanthology.org/aaai/1998/uther1998aaai-tree/}
}