Dimension Reduction and Its Application to Model-Based Exploration in Continuous Spaces
Abstract
The sample complexity of a reinforcement-learning algorithm is highly coupled to how proficiently it explores, which in turn depends critically on the effective size of its state space. This paper proposes a new exploration mechanism for model-based algorithms in continuous state spaces that automatically discovers the relevant dimensions of the environment. We show that this information can be used to dramatically decrease the sample complexity of the algorithm over conventional exploration techniques. This improvement is achieved by maintaining a low-dimensional representation of the transition function. Empirical evaluations in several environments, including simulation benchmarks and a real robotics domain, suggest that the new method outperforms state-of-the-art algorithms and that the behavior is robust and stable.
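The abstract does not spell out the paper's algorithm, but the core idea it describes, discovering which state dimensions actually matter to the transition function and modeling only those, can be illustrated with a hypothetical sketch. Below, a synthetic transition depends on only two of six state dimensions; a least-squares fit recovers them (the data, threshold, and coefficients are illustrative assumptions, not the paper's method).

```python
import numpy as np

# Hypothetical illustration (not the paper's algorithm): in a continuous
# state space, only some dimensions influence the transition function.
# Identifying them lets a model-based learner explore a smaller space.
rng = np.random.default_rng(0)
n_samples, n_dims = 500, 6

states = rng.normal(size=(n_samples, n_dims))
# Synthetic transition: the next-state change depends only on dims 0 and 2.
deltas = 0.8 * states[:, 0] - 0.5 * states[:, 2] \
    + 0.01 * rng.normal(size=n_samples)

# Least-squares fit of the observed change against all state dimensions.
coef, *_ = np.linalg.lstsq(states, deltas, rcond=None)

# Dimensions with non-negligible weight are treated as relevant.
relevant = [d for d in range(n_dims) if abs(coef[d]) > 0.1]
print(relevant)  # dims 0 and 2 are recovered; the rest can be ignored
```

A learner that restricts its transition model (and hence its exploration bonus) to the recovered dimensions faces an effectively 2-dimensional problem instead of a 6-dimensional one, which is the kind of sample-complexity reduction the abstract claims.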
Cite
Text
Nouri and Littman. "Dimension Reduction and Its Application to Model-Based Exploration in Continuous Spaces." Machine Learning, 2010. doi:10.1007/s10994-010-5202-y

Markdown

[Nouri and Littman. "Dimension Reduction and Its Application to Model-Based Exploration in Continuous Spaces." Machine Learning, 2010.](https://mlanthology.org/mlj/2010/nouri2010mlj-dimension/) doi:10.1007/s10994-010-5202-y

BibTeX
@article{nouri2010mlj-dimension,
title = {{Dimension Reduction and Its Application to Model-Based Exploration in Continuous Spaces}},
author = {Nouri, Ali and Littman, Michael L.},
journal = {Machine Learning},
year = {2010},
  pages = {85--98},
  doi = {10.1007/s10994-010-5202-y},
volume = {81},
url = {https://mlanthology.org/mlj/2010/nouri2010mlj-dimension/}
}