Matrix Games with Bandit Feedback
Abstract
We study a version of the classical zero-sum matrix game with an unknown payoff matrix and bandit feedback, where the players only observe each other's actions and a noisy payoff. This generalizes the usual matrix game, in which the payoff matrix is known to the players. Despite numerous applications, this problem has received relatively little attention. Although adversarial bandit algorithms achieve low regret, they do not exploit the matrix structure and perform poorly relative to the new algorithms. The main contributions are regret analyses of variants of UCB and K-learning that hold for any opponent, e.g., even when the opponent adversarially plays the best response to the learner's mixed strategy. Along the way, we show that Thompson sampling fails catastrophically in this setting and provide empirical comparison to existing algorithms.
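To make the feedback model concrete, here is a minimal sketch of the setting: both players see the joint action, but only a noisy payoff is revealed, and the row player runs a simple entrywise optimistic (UCB-style) rule. The payoff matrix, noise level, and opponent behavior below are illustrative assumptions, not the paper's algorithms or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 payoff matrix, unknown to the players (an assumption).
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
sigma = 0.1  # noise scale (illustrative; the setting allows general noisy payoffs)

def step(i, j):
    """Bandit feedback: players observe actions (i, j) and a noisy payoff."""
    return A[i, j] + sigma * rng.normal()

# Entrywise optimistic estimates for the row player (a sketch, not the
# paper's UCB or K-learning variants). The column player here plays
# uniformly at random, purely for illustration.
T = 2000
sums = np.zeros_like(A)
counts = np.ones_like(A)  # start at 1 to avoid division by zero
for t in range(1, T + 1):
    ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
    i = int(np.argmax(ucb.min(axis=1)))   # maximin over optimistic estimates
    j = int(rng.integers(A.shape[1]))
    r = step(i, j)
    sums[i, j] += r
    counts[i, j] += 1

est = sums / counts  # empirical payoff estimates after T rounds
```

The point of the sketch is the information structure: the learner never sees the matrix `A`, only one noisy entry per round, which is what distinguishes this problem from the classical known-matrix game.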
Cite
Text
O’Donoghue et al. "Matrix Games with Bandit Feedback." Uncertainty in Artificial Intelligence, 2021.
Markdown
[O’Donoghue et al. "Matrix Games with Bandit Feedback." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/odonoghue2021uai-matrix/)
BibTeX
@inproceedings{odonoghue2021uai-matrix,
title = {{Matrix Games with Bandit Feedback}},
author = {O’Donoghue, Brendan and Lattimore, Tor and Osband, Ian},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2021},
pages = {279-289},
volume = {161},
url = {https://mlanthology.org/uai/2021/odonoghue2021uai-matrix/}
}