TD-DeltaPi: A Model-Free Algorithm for Efficient Exploration

Abstract

We study the problem of finding efficient exploration policies for the case in which an agent is momentarily not concerned with exploiting, and instead tries to compute a policy for later use. We first formally define the Optimal Exploration Problem as one of sequential sampling and show that its solutions correspond to paths of minimum expected length in the space of policies. We derive a model-free, local linear approximation to such solutions and use it to construct efficient exploration policies. We compare our model-free approach to other exploration techniques, including one with the best known PAC bounds, and show that ours is both based on a well-defined optimization problem and empirically efficient.
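To give a concrete feel for the kind of model-free, exploration-for-its-own-sake loop the abstract describes, here is a minimal sketch. It is an illustration under stated assumptions, not the paper's exact TD-DeltaPi construction: a toy chain MDP, a tabular agent, and an intrinsic "learning progress" reward defined as the magnitude of the agent's own value-function update, with a separate exploration value function learned by TD and used to drive behavior.

```python
# Illustrative sketch only: a tabular agent that learns a separate "exploration
# value function" by TD, using the size of its own task value-function update as
# an intrinsic reward. The chain MDP, hyperparameters, and the intrinsic-reward
# definition are assumptions made for this example, not the paper's algorithm.
import numpy as np

N_STATES, N_ACTIONS = 10, 2          # simple chain: action 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Toy chain MDP: reward 1 only at the right end (stand-in task)."""
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    return s_next, reward

Q = np.zeros((N_STATES, N_ACTIONS))  # task value estimates (for later use)
E = np.zeros((N_STATES, N_ACTIONS))  # exploration value estimates (drive behavior)

s = 0
for t in range(20_000):
    # Behave according to the exploration value function (epsilon-greedy).
    a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(E[s]))
    s_next, r = step(s, a)

    # Ordinary TD update of the task value function.
    td_error = r + GAMMA * Q[s_next].max() - Q[s, a]
    Q[s, a] += ALPHA * td_error

    # Intrinsic reward: how much the task estimates just changed
    # (a crude "learning progress" signal; an assumption of this sketch).
    r_intrinsic = abs(ALPHA * td_error)

    # TD update of the exploration value function toward that signal.
    E[s, a] += ALPHA * (r_intrinsic + GAMMA * E[s_next].max() - E[s, a])

    s = 0 if s_next == N_STATES - 1 else s_next  # restart the episode at the goal

print("Greedy task policy after exploration:", np.argmax(Q, axis=1))
```

The design point the sketch tries to convey is the separation of roles: the exploration value function `E` decides where to go while it is cheap to explore, and the task value function `Q` is the artifact computed for later exploitation.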

Cite

Text

da Silva and Barto. "TD-DeltaPi: A Model-Free Algorithm for Efficient Exploration." AAAI Conference on Artificial Intelligence, 2012. doi:10.1609/AAAI.V26I1.8286

Markdown

[da Silva and Barto. "TD-DeltaPi: A Model-Free Algorithm for Efficient Exploration." AAAI Conference on Artificial Intelligence, 2012.](https://mlanthology.org/aaai/2012/dasilva2012aaai-td/) doi:10.1609/AAAI.V26I1.8286

BibTeX

@inproceedings{dasilva2012aaai-td,
  title     = {{TD-DeltaPi: A Model-Free Algorithm for Efficient Exploration}},
  author    = {da Silva, Bruno Castro and Barto, Andrew G.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2012},
  pages     = {886--892},
  doi       = {10.1609/AAAI.V26I1.8286},
  url       = {https://mlanthology.org/aaai/2012/dasilva2012aaai-td/}
}