Uncoupled and Convergent Learning in Monotone Games Under Bandit Feedback

Abstract

We study no-regret learning algorithms for general monotone and smooth games and their last-iterate convergence properties. Specifically, we investigate the problem under bandit feedback and strongly uncoupled dynamics, which allow modular development of the multi-player system and apply to a wide range of real applications. We propose a mirror-descent-based algorithm that converges at a rate of $O(T^{-1/4})$ and is also no-regret. The result is achieved through a careful combination of two regularizations and an analysis of their fixed point. The convergence rate further improves to $O(T^{-1/2})$ for strongly monotone games. Motivated by practical tasks where the game evolves over time, we extend the algorithm to time-varying monotone games. We provide the first non-asymptotic result for converging monotone games and improved results for equilibrium tracking.
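The setting described above can be illustrated with a minimal sketch: each player perturbs its action, observes only its own realized cost (bandit feedback), forms a one-point gradient estimate, and takes a projected descent step. This is not the paper's exact algorithm; the quadratic game, shrinking exploration radius, and step-size schedule below are all illustrative assumptions.

```python
import random

def cost(i, x):
    # A simple strongly monotone quadratic game with Nash equilibrium at (0, 0):
    # cost_i(x) = x_i^2 + x_i * x_{-i}
    return x[i] ** 2 + x[i] * x[1 - i]

def run(T=20000, seed=0):
    rng = random.Random(seed)
    x = [0.8, -0.6]  # initial actions in [-1, 1]
    for t in range(1, T + 1):
        delta = t ** -0.25       # shrinking exploration radius
        eta = 0.5 * t ** -0.75   # step size
        u = [rng.choice([-1.0, 1.0]) for _ in range(2)]  # random directions
        played = [x[i] + delta * u[i] for i in range(2)]
        for i in range(2):
            # Bandit feedback: player i uses only its own realized cost.
            g = (cost(i, played) / delta) * u[i]  # one-point gradient estimate
            x[i] = max(-1.0, min(1.0, x[i] - eta * g))  # projected descent step
    return x

final = run()
```

The updates are uncoupled: no player observes the others' costs or gradients, only the scalar cost of the joint action actually played. With Euclidean projection, the mirror-descent step reduces to ordinary projected gradient descent; the iterates drift toward the equilibrium at the origin as the exploration radius shrinks.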

Cite

Text

Dong et al. "Uncoupled and Convergent Learning in Monotone Games Under Bandit Feedback." Advances in Neural Information Processing Systems, 2025.

Markdown

[Dong et al. "Uncoupled and Convergent Learning in Monotone Games Under Bandit Feedback." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/dong2025neurips-uncoupled/)

BibTeX

@inproceedings{dong2025neurips-uncoupled,
  title     = {{Uncoupled and Convergent Learning in Monotone Games Under Bandit Feedback}},
  author    = {Dong, Jing and Wang, Baoxiang and Yu, Yaoliang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/dong2025neurips-uncoupled/}
}