Experience-Efficient Learning in Associative Bandit Problems

Abstract

We formalize the associative bandit problem framework introduced by Kaelbling as a learning-theory problem. The learning environment is modeled as a k-armed bandit where arm payoffs are conditioned on an observable input selected on each trial. We show that, if the payoff functions are constrained to a known hypothesis class, learning can be performed efficiently with respect to the VC dimension of this class. We formally reduce the problem of PAC classification to the associative bandit problem, producing an efficient algorithm for any hypothesis class for which efficient classification algorithms are known. We demonstrate the approach empirically on a scalable concept class.
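To make the setting concrete, here is a minimal illustrative sketch (not the paper's algorithm) of the associative bandit setup described in the abstract: on each trial the learner observes an input, pulls one of k arms, and receives a payoff whose distribution depends on both the arm and the input. All names and the threshold payoff structure below are hypothetical, standing in for a simple known hypothesis class.

```python
import random

K = 2  # number of arms (illustrative)

def payoff_mean(arm, x):
    # Each arm's expected payoff is conditioned on the observed input x.
    # Here arm 0 pays well when x < 0.5 and arm 1 pays well otherwise,
    # a simple threshold concept standing in for a known hypothesis class.
    if arm == 0:
        return 0.9 if x < 0.5 else 0.1
    return 0.1 if x < 0.5 else 0.9

def trial(policy, rng):
    x = rng.random()        # observable input selected for this trial
    arm = policy(x)         # learner chooses an arm given the input
    reward = 1 if rng.random() < payoff_mean(arm, x) else 0
    return x, arm, reward

# A policy that has already learned the threshold structure:
good_policy = lambda x: 0 if x < 0.5 else 1

rng = random.Random(0)
total = sum(trial(good_policy, rng)[2] for _ in range(1000))
avg = total / 1000  # average payoff, close to the optimal 0.9
```

A learner in this setting must identify, for each arm, a payoff function from the hypothesis class; the paper's reduction shows this can be done experience-efficiently whenever efficient PAC classification algorithms exist for that class.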

Cite

Text

Strehl et al. "Experience-Efficient Learning in Associative Bandit Problems." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143956

Markdown

[Strehl et al. "Experience-Efficient Learning in Associative Bandit Problems." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/strehl2006icml-experience/) doi:10.1145/1143844.1143956

BibTeX

@inproceedings{strehl2006icml-experience,
  title     = {{Experience-Efficient Learning in Associative Bandit Problems}},
  author    = {Strehl, Alexander L. and Mesterharm, Chris and Littman, Michael L. and Hirsh, Haym},
  booktitle = {International Conference on Machine Learning},
  year      = {2006},
  pages     = {889--896},
  doi       = {10.1145/1143844.1143956},
  url       = {https://mlanthology.org/icml/2006/strehl2006icml-experience/}
}