Improving Policy Gradient Estimates with Influence Information
Abstract
In reinforcement learning (RL) it is often possible to obtain sound, but incomplete, information about influences and independencies among problem variables and rewards, even when an exact domain model is unknown. For example, such information can be computed based on a partial, qualitative domain model, or via domain-specific analysis techniques. While, intuitively, such information appears useful for RL, there are no algorithms that incorporate it in a sound way. In this work, we describe how to leverage such information for improving the estimation of policy gradients, which can be used to speed up gradient-based RL. We prove general conditions under which our estimator is unbiased and show that it will typically have reduced variance compared to standard unbiased gradient estimates. We evaluate the approach in the domain of Adaptation-Based Programming, where RL is used to optimize the performance of programs and independence information can be computed via standard program analysis techniques. Incorporating independence information produces a large speedup in learning on a variety of adaptive programs.
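The core idea summarized above, dropping reward terms that a given decision provably cannot influence so that the gradient estimate stays unbiased while its variance shrinks, can be illustrated with a minimal REINFORCE-style sketch. This is only an illustration of the general idea under assumed interfaces, not the paper's exact estimator: the `grad_log_pi` and `influences` callables and the trajectory format are hypothetical names introduced for the example.

```python
def pg_estimate(trajectories, grad_log_pi, influences=None):
    """REINFORCE-style policy-gradient estimate (illustrative sketch).

    trajectories: list of episodes, each a list of
        (choice_point, action, reward) tuples.
    grad_log_pi: callable (choice_point, action) -> gradient vector of
        log pi_theta(action | choice_point).
    influences: optional callable (choice_index, reward_index) -> bool,
        True if the decision at choice_index can influence the reward at
        reward_index. When None, every decision is assumed to influence
        every reward, giving the standard unbiased estimator.
    """
    grad = None
    for episode in trajectories:
        rewards = [r for (_, _, r) in episode]
        for i, (cp, a, _) in enumerate(episode):
            # Sum only the rewards this decision can influence. Rewards
            # known to be independent of the decision contribute zero in
            # expectation, so dropping them keeps the estimator unbiased
            # while removing a source of variance.
            if influences is None:
                ret = sum(rewards)
            else:
                ret = sum(r for j, r in enumerate(rewards) if influences(i, j))
            g = grad_log_pi(cp, a) * ret
            grad = g if grad is None else grad + g
    return grad / len(trajectories)
```

In this sketch the `influences` predicate stands in for the kind of sound independence information the paper obtains, e.g. from program analysis in Adaptation-Based Programming.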
Cite
Text
Pinto et al. "Improving Policy Gradient Estimates with Influence Information." Proceedings of the Third Asian Conference on Machine Learning, 2011.
Markdown
[Pinto et al. "Improving Policy Gradient Estimates with Influence Information." Proceedings of the Third Asian Conference on Machine Learning, 2011.](https://mlanthology.org/acml/2011/pinto2011acml-improving/)
BibTeX
@inproceedings{pinto2011acml-improving,
title = {{Improving Policy Gradient Estimates with Influence Information}},
author = {Pinto, Jervis and Fern, Alan and Bauer, Tim and Erwig, Martin},
booktitle = {Proceedings of the Third Asian Conference on Machine Learning},
year = {2011},
pages = {1--18},
volume = {20},
url = {https://mlanthology.org/acml/2011/pinto2011acml-improving/}
}