A Finite Memory Automaton for Two-Armed Bernoulli Bandit Problems

Abstract

Existing approaches to the multi-armed bandit (MAB) problem primarily rely on perfect recall of past actions to generate estimates of arm payoff probabilities; it is further assumed that the decision maker knows whether arm payoff probabilities can change. To capture the computational limitations many decision-making systems face, we explore performance under bounded resources in the form of imperfect recall of past information. We present a finite memory automaton (FMA) designed to solve static and dynamic MAB problems. The FMA demonstrates that an agent can learn a low-regret strategy without knowing whether arm payoff probabilities are static or dynamic and without having perfect recall of past actions. Roughly speaking, the automaton works by maintaining a relative ranking of arms rather than estimating precise payoff probabilities.
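To give a feel for the bounded-memory idea, the sketch below implements a hypothetical finite-memory strategy for a two-armed Bernoulli bandit: a single clamped counter ranks the two arms relative to each other, so the agent never stores reward counts or probability estimates. This is an illustrative stand-in (a bounded random-walk automaton), not the paper's actual FMA construction; the class and parameter names are invented for this example.

```python
import random


class BoundedCounterBandit:
    """Illustrative finite-memory two-armed bandit strategy (hypothetical,
    not the paper's FMA): one bounded integer counter encodes a relative
    ranking of the arms instead of payoff-probability estimates."""

    def __init__(self, capacity=4):
        self.capacity = capacity  # memory bound: state stays in [-capacity, capacity]
        self.state = 0            # < 0 favors arm 0, > 0 favors arm 1, 0 is undecided

    def choose(self):
        if self.state < 0:
            return 0
        if self.state > 0:
            return 1
        return random.randint(0, 1)  # break ties randomly

    def update(self, arm, reward):
        # A success nudges the counter toward the played arm; a failure
        # nudges it away. Clamping enforces the finite-memory constraint,
        # which also lets the agent drift back if payoffs change.
        step = 1 if reward else -1
        if arm == 1:
            self.state = min(self.capacity, self.state + step)
        else:
            self.state = max(-self.capacity, self.state - step)


def run(agent, probs, horizon, rng):
    """Play `horizon` rounds against Bernoulli arms with the given payoff probs."""
    total = 0
    for _ in range(horizon):
        arm = agent.choose()
        reward = 1 if rng.random() < probs[arm] else 0
        agent.update(arm, reward)
        total += reward
    return total
```

Because the counter is clamped, a long streak of successes cannot lock the agent in permanently: a shift in payoff probabilities only needs a bounded number of contrary observations to flip the ranking, which is one intuition for why relative-ranking schemes can cope with dynamic arms.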

Cite

Text

Rao. "A Finite Memory Automaton for Two-Armed Bernoulli Bandit Problems." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.11115

Markdown

[Rao. "A Finite Memory Automaton for Two-Armed Bernoulli Bandit Problems." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/rao2017aaai-finite/) doi:10.1609/AAAI.V31I1.11115

BibTeX

@inproceedings{rao2017aaai-finite,
  title     = {{A Finite Memory Automaton for Two-Armed Bernoulli Bandit Problems}},
  author    = {Rao, Ariel},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {4981--4982},
  doi       = {10.1609/AAAI.V31I1.11115},
  url       = {https://mlanthology.org/aaai/2017/rao2017aaai-finite/}
}