Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits
Abstract
The linear contextual bandit is a popular online learning problem that has mostly been studied in centralized learning settings. With the surging demand for large-scale decentralized model learning, e.g., federated learning, retaining regret minimization while reducing communication cost becomes an open challenge. In this paper, we study the linear contextual bandit in a federated learning setting. We propose a general framework with asynchronous model update and communication for collections of homogeneous clients and heterogeneous clients, respectively. We provide rigorous theoretical analysis of the regret and communication cost under this distributed learning framework, and extensive empirical evaluations demonstrate the effectiveness of our solution.
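The framework builds on upper confidence bound (UCB) methods for linear contextual bandits. As background, a minimal single-client LinUCB-style sketch is shown below; this is an illustrative baseline under standard assumptions (ridge-regression estimate plus an exploration bonus), not the paper's federated or asynchronous variant, and all names here are illustrative:

```python
import numpy as np

class LinUCBAgent:
    """Single-client linear UCB sketch: ridge regression + confidence bonus."""

    def __init__(self, d, lam=1.0, alpha=1.0):
        self.A = lam * np.eye(d)   # regularized Gram matrix (sufficient statistic)
        self.b = np.zeros(d)       # accumulated reward-weighted contexts
        self.alpha = alpha         # exploration strength

    def select(self, arm_features):
        """Pick the arm maximizing estimated reward + exploration bonus."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b     # ridge-regression estimate of the model
        scores = [
            x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
            for x in arm_features
        ]
        return int(np.argmax(scores))

    def update(self, x, reward):
        """Incorporate the observed reward for the chosen arm's context x."""
        self.A += np.outer(x, x)
        self.b += reward * x
```

In a federated variant, each client would maintain such local statistics (`A`, `b`) and exchange them with a server only when a communication criterion is triggered; the asynchronous design studied in the paper removes the need for clients to wait on one another.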
Cite
Li and Wang. "Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits." Artificial Intelligence and Statistics, 2022. https://mlanthology.org/aistats/2022/li2022aistats-asynchronous/

BibTeX
@inproceedings{li2022aistats-asynchronous,
title = {{Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits}},
author = {Li, Chuanhao and Wang, Hongning},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {6529-6553},
volume = {151},
url = {https://mlanthology.org/aistats/2022/li2022aistats-asynchronous/}
}