Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion

Abstract

Bayesian persuasion studies how an informed sender should influence the beliefs of rational receivers who make decisions by Bayesian updating of a common prior. We focus on the online Bayesian persuasion framework, in which the sender repeatedly faces one or more receivers with unknown and adversarially selected types. First, we show how to obtain a tight $\tilde O(T^{1/2})$ regret bound when the sender faces a single receiver and has bandit feedback, improving over the best previously known bound of $\tilde O(T^{4/5})$. Then, we provide the first no-regret guarantees for the multi-receiver setting under bandit feedback. Finally, we show how to design no-regret algorithms with polynomial per-iteration running time by exploiting type reporting, thereby circumventing known complexity results on online Bayesian persuasion. When type reporting is allowed, we provide efficient algorithms guaranteeing an $O(T^{1/2})$ regret upper bound in both the single-receiver and multi-receiver scenarios.
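
For context, the bounds above refer to the sender's regret against the best fixed signaling scheme in hindsight. The display below is a rough sketch of that standard notion under illustrative notation (the symbols $\Phi$, $\phi_t$, $k_t$, and $u$ are assumptions for exposition, not taken from the paper):

$$
R_T \;=\; \max_{\phi \in \Phi} \sum_{t=1}^{T} u(\phi, k_t) \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} u(\phi_t, k_t)\right],
$$

where $\Phi$ is the set of signaling schemes available to the sender, $k_t$ is the adversarially selected receiver type at round $t$, $\phi_t$ is the scheme chosen by the learner, and $u$ denotes the sender's expected utility. A $\tilde O(T^{1/2})$ regret bound then means $R_T$ grows at most as $\sqrt{T}$ up to logarithmic factors.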

Cite

Text

Bernasconi et al. "Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion." International Conference on Machine Learning, 2023.

Markdown

[Bernasconi et al. "Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/bernasconi2023icml-optimal/)

BibTeX

@inproceedings{bernasconi2023icml-optimal,
  title     = {{Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion}},
  author    = {Bernasconi, Martino and Castiglioni, Matteo and Celli, Andrea and Marchesi, Alberto and Trovò, Francesco and Gatti, Nicola},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {2164--2183},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/bernasconi2023icml-optimal/}
}