No-Regret Learning in Unknown Games with Correlated Payoffs
Abstract
We consider the problem of learning to play a repeated multi-agent game with an unknown reward function. Single-player online learning algorithms attain strong regret bounds when provided with full-information feedback, which unfortunately is unavailable in many real-world scenarios. Bandit feedback alone, i.e., observing outcomes only for the selected action, yields substantially worse performance. In this paper, we consider a natural model where, besides a noisy measurement of the obtained reward, the player can also observe the opponents' actions. This feedback model, together with a regularity assumption on the reward function, allows us to exploit the correlations among different game outcomes by means of Gaussian processes (GPs). We propose a novel confidence-bound-based bandit algorithm GP-MW, which utilizes the GP model for the reward function and runs a multiplicative weight (MW) method. We obtain novel kernel-dependent regret bounds that are comparable to the known bounds in the full-information setting, while substantially improving upon the existing bandit results. We experimentally demonstrate the effectiveness of GP-MW in random matrix games, as well as real-world problems of traffic routing and movie recommendation. In our experiments, GP-MW consistently outperforms several baselines, while its performance is often comparable to methods that have access to full-information feedback.
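The abstract outlines the two ingredients of GP-MW: a GP posterior over the unknown reward, and a multiplicative-weights update driven by optimistic reward estimates for all of the player's actions against the observed opponent play. Below is a minimal sketch of such a learner, assuming a finite own-action set, an RBF kernel over integer-coded joint actions, and illustrative hyperparameters (`eta`, `beta`, noise level) plus a toy random matrix game; none of these choices are taken from the paper, so treat this as an approximation of the idea rather than the authors' exact algorithm.

```python
"""GP-MW-style learner: GP upper confidence bounds as surrogate
full-information feedback for a multiplicative-weights (Hedge) update.
Hyperparameters and the toy game are illustrative assumptions."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


class GPMWPlayer:
    def __init__(self, n_actions, eta=0.5, beta=2.0, noise=0.1):
        self.K = n_actions
        self.eta = eta                  # multiplicative-weights learning rate
        self.beta = beta                # UCB exploration coefficient
        self.weights = np.ones(n_actions)
        self.X, self.y = [], []         # observed (own action, opponent action) -> noisy reward
        self.gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                           alpha=noise ** 2,
                                           normalize_y=True)

    def act(self, rng):
        # Sample an action from the current mixed strategy.
        p = self.weights / self.weights.sum()
        return rng.choice(self.K, p=p)

    def update(self, own_action, opp_action, reward):
        # 1) Update the GP posterior with the newly observed game outcome.
        self.X.append([own_action, opp_action])
        self.y.append(reward)
        self.gp.fit(np.array(self.X), np.array(self.y))

        # 2) Optimistic reward estimates for *all* own actions against the
        #    observed opponent action (surrogate full-information feedback).
        cand = np.array([[a, opp_action] for a in range(self.K)])
        mu, sigma = self.gp.predict(cand, return_std=True)
        ucb = np.clip(mu + self.beta * sigma, 0.0, 1.0)

        # 3) Multiplicative-weights update on the optimistic rewards.
        self.weights *= np.exp(self.eta * ucb)
        self.weights /= self.weights.max()   # rescale for numerical stability


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    payoff = rng.uniform(size=(5, 5))        # unknown reward matrix (toy example)
    player = GPMWPlayer(n_actions=5)
    for t in range(50):
        a = player.act(rng)
        b = rng.integers(5)                  # opponent plays arbitrarily
        r = payoff[a, b] + 0.1 * rng.normal()
        player.update(a, b, r)
    print("final strategy:", np.round(player.weights / player.weights.sum(), 3))
```

The key point the sketch illustrates is that observing the opponent's action lets the GP fill in counterfactual rewards for unplayed actions, which is what narrows the gap to full-information regret bounds.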
Cite
Text
Sessa et al. "No-Regret Learning in Unknown Games with Correlated Payoffs." Neural Information Processing Systems, 2019.
Markdown
[Sessa et al. "No-Regret Learning in Unknown Games with Correlated Payoffs." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/sessa2019neurips-noregret/)
BibTeX
@inproceedings{sessa2019neurips-noregret,
  title     = {{No-Regret Learning in Unknown Games with Correlated Payoffs}},
  author    = {Sessa, Pier Giuseppe and Bogunovic, Ilija and Kamgarpour, Maryam and Krause, Andreas},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {13624--13633},
  url       = {https://mlanthology.org/neurips/2019/sessa2019neurips-noregret/}
}