Uncoupled and Convergent Learning in Monotone Games Under Bandit Feedback
Abstract
We study no-regret learning algorithms for general monotone and smooth games and their last-iterate convergence properties. Specifically, we investigate the problem under bandit feedback and strongly uncoupled dynamics, which allow modular development of the multi-player system and apply to a wide range of real applications. We propose a mirror-descent-based algorithm that converges at rate $O(T^{-1/4})$ and is also no-regret. The result is achieved through a dedicated use of two regularizations and an analysis of their fixed point. The convergence rate improves to $O(T^{-1/2})$ in the case of strongly monotone games. Motivated by practical tasks where the game evolves over time, we extend the algorithm to time-varying monotone games. We provide the first non-asymptotic result for converging monotone games and give improved results for equilibrium-tracking games.
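To make the setting concrete, the following is a minimal sketch of uncoupled bandit learning in a monotone game: each player perturbs only its own action, observes a single scalar payoff (bandit feedback), forms a one-point gradient estimate, and takes a mirror-descent step (here with the Euclidean regularizer, i.e., projected gradient). The two-player quadratic game, the step sizes, and all function names are illustrative assumptions, not the paper's actual algorithm or its two-regularizer construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(i, x, b=0.5):
    # Assumed example: a two-player strongly monotone game with
    # cost_i(x) = 0.5 * x_i^2 + b * x_i * x_j; unique equilibrium at x = 0.
    j = 1 - i
    return 0.5 * x[i] ** 2 + b * x[i] * x[j]

def run(T=20000, delta=0.05, eta=0.01, radius=1.0):
    x = np.array([0.8, -0.6])  # players' current (unperturbed) actions
    for _ in range(T):
        # Uncoupled: each player draws its own Rademacher perturbation,
        # with no knowledge of the other player's action or payoff.
        u = rng.choice([-1.0, 1.0], size=2)
        played = x + delta * u
        # One-point bandit gradient estimate from the scalar payoff alone:
        # g_i = cost_i(played) * u_i / delta  (unbiased up to O(delta) terms).
        g = np.array([cost(i, played) * u[i] / delta for i in range(2)])
        # Mirror-descent step with Euclidean regularizer = projected gradient.
        # (A careful implementation would project onto a delta-shrunk set so
        # that the played action stays feasible.)
        x = np.clip(x - eta * g, -radius, radius)
    return x

x_final = run()
```

Despite each player seeing only its own scalar payoff, the iterates drift toward the equilibrium at the origin; the last iterate, not just the time average, converges, which is the property the paper quantifies.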
Cite

Text:
Dong et al. "Uncoupled and Convergent Learning in Monotone Games Under Bandit Feedback." NeurIPS 2024 Workshops: OPT, 2024.

Markdown:
[Dong et al. "Uncoupled and Convergent Learning in Monotone Games Under Bandit Feedback." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/dong2024neuripsw-uncoupled/)

BibTeX:
@inproceedings{dong2024neuripsw-uncoupled,
title = {{Uncoupled and Convergent Learning in Monotone Games Under Bandit Feedback}},
author = {Dong, Jing and Wang, Baoxiang and Yu, Yaoliang},
booktitle = {NeurIPS 2024 Workshops: OPT},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/dong2024neuripsw-uncoupled/}
}