Relativized Options: Choosing the Right Transformation
Abstract
Relativized options combine model minimization methods and a hierarchical reinforcement learning framework to derive compact reduced representations of a related family of tasks. Relativized options are defined without an absolute frame of reference, and an option's policy is transformed suitably based on the circumstances under which the option is invoked. In earlier work we addressed the issue of learning the option policy online. In this article we develop an algorithm for choosing, from among a set of candidate transformations, the right transformation for each member of the family of tasks.
Cite
Text
Ravindran and Barto. "Relativized Options: Choosing the Right Transformation." International Conference on Machine Learning, 2003.
Markdown
[Ravindran and Barto. "Relativized Options: Choosing the Right Transformation." International Conference on Machine Learning, 2003.](https://mlanthology.org/icml/2003/ravindran2003icml-relativized/)
BibTeX
@inproceedings{ravindran2003icml-relativized,
title = {{Relativized Options: Choosing the Right Transformation}},
author = {Ravindran, Balaraman and Barto, Andrew G.},
booktitle = {International Conference on Machine Learning},
year = {2003},
pages = {608--615},
url = {https://mlanthology.org/icml/2003/ravindran2003icml-relativized/}
}