Policy Iteration Based on a Learned Transition Model
Abstract
This paper investigates a reinforcement learning method that combines learning a model of the environment with least-squares policy iteration (LSPI). The LSPI algorithm learns a linear approximation of the optimal state-action value function; the idea studied here is to let this value function depend on a learned estimate of the expected next state instead of directly on the current state and action. This approach makes it easier to define useful basis functions, and hence to learn a useful linear approximation of the value function. Experiments show that the new algorithm, called NSPI for next-state policy iteration, performs well on two standard benchmarks, the well-known mountain car and inverted pendulum swing-up tasks. More importantly, the NSPI algorithm performs well, and better than a specialized recent method, on a resource management task known as the day-ahead wind commitment problem. This latter task has action and state spaces that are high-dimensional and continuous.
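The core idea in the abstract — evaluating basis functions at a learned model's expected next state rather than at the current state-action pair — can be sketched as follows. This is a minimal illustration with hypothetical names, not the paper's code: the transition model, basis centers, and weights here are stand-ins for quantities NSPI would learn from data.

```python
import math

def radial_basis(centers, width):
    """Return a feature map phi(s) of Gaussian radial basis functions."""
    def phi(s):
        return [math.exp(-((s - c) ** 2) / width) for c in centers]
    return phi

def make_q(model, phi, w):
    """Q(s, a) = w . phi(model(s, a)): the basis functions are evaluated
    at the learned model's expected next state, not at (s, a) directly."""
    def q(s, a):
        s_next = model(s, a)  # expected next state from the learned model
        return sum(wi * fi for wi, fi in zip(w, phi(s_next)))
    return q

def greedy(q, actions):
    """Greedy policy over a finite action set, as in policy improvement."""
    def pi(s):
        return max(actions, key=lambda a: q(s, a))
    return pi

# Toy 1-D example: a (hypothetical) learned linear model and weights
# that make states near 1.0 valuable; the greedy policy moves toward 1.0.
model = lambda s, a: s + 0.1 * a
q = make_q(model, radial_basis([0.0, 1.0], 0.5), [0.0, 1.0])
pi = greedy(q, [-1, 0, 1])
```

Because the value function depends on the state only through the predicted next state, the same set of basis functions over states can serve every action, which is what makes useful features easier to define than in standard LSPI.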
Cite
Text
Ramavajjala and Elkan. "Policy Iteration Based on a Learned Transition Model." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2012. doi:10.1007/978-3-642-33486-3_14
Markdown
[Ramavajjala and Elkan. "Policy Iteration Based on a Learned Transition Model." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2012.](https://mlanthology.org/ecmlpkdd/2012/ramavajjala2012ecmlpkdd-policy/) doi:10.1007/978-3-642-33486-3_14
BibTeX
@inproceedings{ramavajjala2012ecmlpkdd-policy,
title = {{Policy Iteration Based on a Learned Transition Model}},
author = {Ramavajjala, Vivek and Elkan, Charles},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2012},
  pages = {211--226},
doi = {10.1007/978-3-642-33486-3_14},
url = {https://mlanthology.org/ecmlpkdd/2012/ramavajjala2012ecmlpkdd-policy/}
}