Fractional Moments on Bandit Problems

Abstract

Reinforcement learning addresses the dilemma between exploration to find profitable actions and exploitation to act according to the best observations already made. Bandit problems are one such class of problems, set in stateless environments, that represent this explore/exploit situation. We propose a learning algorithm for bandit problems based on the fractional expectation of rewards acquired. The algorithm is theoretically shown to converge on an ε-optimal arm and achieve O(n) sample complexity. Experimental results show that the algorithm incurs substantially lower regret than parameter-optimized ε-greedy and SoftMax approaches, as well as other low sample complexity state-of-the-art techniques.
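The abstract only sketches the idea of scoring arms by fractional moments of their rewards. The sketch below (Python, not from the paper) illustrates one way such a rule might look: arms are ranked by an empirical estimate of E[r^p] for a fractional exponent p, after a short round-robin warm-up. The reward model, exploration schedule, and all names here are assumptions for illustration, not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(0)

def fractional_moment_bandit(arm_means, horizon=10_000, p=0.5, init_pulls=10):
    # Illustrative sketch only: rank arms by an empirical fractional
    # moment E[r^p] of their observed rewards. The paper's actual update
    # rule and exploration schedule are not given in the abstract.
    k = len(arm_means)
    counts = np.zeros(k)
    moment_sums = np.zeros(k)              # running sums of r**p per arm
    regret = 0.0
    best = max(arm_means)
    for t in range(horizon):
        if t < init_pulls * k:
            a = t % k                      # round-robin warm-up (assumption)
        else:
            a = int(np.argmax(moment_sums / counts))  # greedy on E[r^p] estimate
        mu = arm_means[a]
        r = rng.beta(10 * mu, 10 * (1 - mu))  # rewards in [0, 1] with mean mu
        counts[a] += 1
        moment_sums[a] += r ** p
        regret += best - mu
    return regret

print(fractional_moment_bandit([0.3, 0.5, 0.7]))

Note that with pure greedy selection after the warm-up, this sketch can lock onto a suboptimal arm; the convergence and sample complexity guarantees claimed in the abstract rest on the paper's actual exploration scheme, which the abstract does not specify.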

Cite

Text

B., Ananda Narayanan and Ravindran, Balaraman. "Fractional Moments on Bandit Problems." Conference on Uncertainty in Artificial Intelligence, 2011.

Markdown

[B., Ananda Narayanan and Ravindran, Balaraman. "Fractional Moments on Bandit Problems." Conference on Uncertainty in Artificial Intelligence, 2011.](https://mlanthology.org/uai/2011/b2011uai-fractional/)

BibTeX

@inproceedings{b2011uai-fractional,
  title     = {{Fractional Moments on Bandit Problems}},
  author    = {B., Ananda Narayanan and Ravindran, Balaraman},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2011},
  pages     = {531--538},
  url       = {https://mlanthology.org/uai/2011/b2011uai-fractional/}
}