Online Learning and Blackwell Approachability in Quitting Games

Abstract

We consider the sequential decision problem known as regret minimization, or more precisely its generalization to the vectorial, or multi-criteria, setup called Blackwell approachability. We assume that Nature, the decision maker, or both might have some quitting (or terminating) actions, so that the stream of payoffs is constant whenever one of them is chosen. We call these environments “quitting games”. We characterize the convex target sets $\mathcal{C}$ that are Blackwell approachable, in the sense that the decision maker has a policy ensuring that the expected average vector payoff converges to $\mathcal{C}$ at some given horizon known in advance. Moreover, we compare these results to the case where the horizon is not known, and show that, unlike in the standard online learning literature, the necessary or sufficient conditions for the anytime version of this problem are drastically different from those for the fixed horizon.
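For context, the abstract's notion of approachability builds on Blackwell's classical characterization for repeated games with vector payoffs (a standard background result, not a contribution of this paper). A minimal sketch, with $u(p,q) \in \mathbb{R}^d$ denoting the expected vector payoff under mixed actions $p$ of the decision maker and $q$ of Nature:

```latex
% Blackwell's characterization (1956) for closed convex target sets:
% a closed convex set C ⊆ R^d is approachable if and only if
% for every mixed action q of Nature, the decision maker has a
% mixed action p whose expected vector payoff lies in C.
\[
  \mathcal{C} \text{ is approachable}
  \iff
  \forall q \in \Delta(\mathcal{B}),\;
  \exists p \in \Delta(\mathcal{A}) :\;
  u(p,q) \in \mathcal{C}.
\]
```

The paper's contribution is to determine how this characterization must change when quitting actions freeze the payoff stream, and how the answer depends on whether the horizon is known in advance.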

Cite

Text

Flesch et al. "Online Learning and Blackwell Approachability in Quitting Games." Annual Conference on Computational Learning Theory, 2016.

Markdown

[Flesch et al. "Online Learning and Blackwell Approachability in Quitting Games." Annual Conference on Computational Learning Theory, 2016.](https://mlanthology.org/colt/2016/flesch2016colt-online/)

BibTeX

@inproceedings{flesch2016colt-online,
  title     = {{Online Learning and Blackwell Approachability in Quitting Games}},
  author    = {Flesch, János and Laraki, Rida and Perchet, Vianney},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2016},
  pages     = {941--942},
  url       = {https://mlanthology.org/colt/2016/flesch2016colt-online/}
}