Online Learning in Stackelberg Games with an Omniscient Follower

Abstract

We study the problem of online learning in a two-player decentralized cooperative Stackelberg game. In each round, the leader first takes an action, and the follower then takes their own action after observing the leader's move. The goal of the leader is to learn to minimize the cumulative regret based on the history of interactions. Differing from the traditional formulation of repeated Stackelberg games, we assume that the follower is omniscient, with full knowledge of the true reward, and that they always best-respond to the leader's actions. We analyze the sample complexity of regret minimization in this repeated Stackelberg game. We show that, depending on the reward structure, the presence of the omniscient follower may change the sample complexity drastically, from constant to exponential, even for linear cooperative Stackelberg games. This poses unique challenges for the leader's learning process and the subsequent regret analysis.
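To make the interaction protocol concrete, the following is a minimal simulation sketch. It is not the algorithm from the paper: the cooperative reward matrix, the noise level, the horizon, and the explore-then-commit leader strategy are all illustrative assumptions; only the structure in which the leader acts first and an omniscient follower best-responds comes from the setup described above.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative cooperative reward matrix (an assumption for this sketch):
# both players receive reward[a, b] when the leader plays a and the follower plays b.
reward = np.array([
    [1.0, 0.0, 0.2],
    [0.3, 0.9, 0.1],
    [0.0, 0.4, 0.8],
])
n_leader, n_follower = reward.shape
horizon = 500

def follower_best_response(a):
    # Omniscient follower: knows the true reward and best-responds
    # to the leader action it just observed.
    return int(np.argmax(reward[a]))

# Illustrative explore-then-commit leader strategy (not the paper's method).
explore_rounds = 30
estimates = np.zeros(n_leader)
counts = np.zeros(n_leader)
total_reward = 0.0

for t in range(horizon):
    if t < explore_rounds:
        a = t % n_leader                   # uniform exploration phase
    else:
        a = int(np.argmax(estimates))      # commit to the empirically best leader action
    b = follower_best_response(a)          # follower observes a, then best-responds
    r = reward[a, b] + 0.1 * rng.standard_normal()  # noisy observed reward
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]  # running mean per leader action
    total_reward += r

# Per-round value of the best leader action given a best-responding follower.
best_value = reward.max(axis=1).max()
print(f"average reward: {total_reward / horizon:.3f}, optimal per-round value: {best_value:.3f}")

Under a best-responding follower, the leader in this toy example effectively faces a bandit problem over its own actions, where each action's value is the best joint reward in that row; the sketch only illustrates the interaction loop, not the reward structures for which the constant-to-exponential gap described in the abstract arises.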

Cite

Text

Zhao et al. "Online Learning in Stackelberg Games with an Omniscient Follower." International Conference on Machine Learning, 2023.

Markdown

[Zhao et al. "Online Learning in Stackelberg Games with an Omniscient Follower." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/zhao2023icml-online/)

BibTeX

@inproceedings{zhao2023icml-online,
  title     = {{Online Learning in Stackelberg Games with an Omniscient Follower}},
  author    = {Zhao, Geng and Zhu, Banghua and Jiao, Jiantao and Jordan, Michael},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {42304--42316},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/zhao2023icml-online/}
}