Learning to Negotiate via Voluntary Commitment
Abstract
The partial alignment and conflict of autonomous agents lead to mixed-motive scenarios in many real-world applications. However, agents may fail to cooperate in practice even when cooperation yields a better outcome. One well-known reason for this failure is non-credible commitments. To facilitate commitments among agents for better cooperation, we define Markov Commitment Games (MCGs), a variant of commitment games, where agents can voluntarily commit to their proposed future plans. Based on MCGs, we propose a learnable commitment protocol via policy gradients. We further propose incentive-compatible learning to accelerate convergence to equilibria with better social welfare. Experimental results in challenging mixed-motive tasks demonstrate faster empirical convergence and higher returns for our method compared with baseline methods. Our code is available at https://github.com/shuhui-zhu/DCL.
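To make the commitment idea concrete, below is a minimal, self-contained Python sketch. It is not the paper's MCG formalism or learned protocol; it only illustrates why voluntary, mutually binding commitments can stabilize cooperation in a one-shot Prisoner's Dilemma. All names, payoffs, and the binding rule (commitments take effect only if both agents commit) are assumptions made for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's method): a one-shot
# Prisoner's Dilemma with a voluntary commitment phase. Each agent's pure
# strategy is (commit?, fallback action). If both agents commit, the
# commitment binds and both cooperate; otherwise each plays its fallback.

from itertools import product

# Payoff matrix for (agent-1 action, agent-2 action); C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}

def play(commit_1, fallback_1, commit_2, fallback_2):
    """Resolve one episode: mutual commitment binds, otherwise fallbacks play."""
    if commit_1 and commit_2:
        actions = ("C", "C")                  # both committed: plans are enforced
    else:
        actions = (fallback_1, fallback_2)    # commitment failed to bind
    return PAYOFFS[actions]

def payoff_1(s1, s2):
    """Agent 1's payoff given pure strategies s = (commit?, fallback)."""
    return play(*s1, *s2)[0]

# Candidate equilibrium: commit, but defect if the commitment does not bind.
strategies = list(product([True, False], ["C", "D"]))
s_star = (True, "D")

mutual = payoff_1(s_star, s_star)
best_deviation = max(payoff_1(s, s_star) for s in strategies)
print("Mutual-commitment payoff:", mutual)          # 3 (cooperative outcome)
print("Best unilateral deviation:", best_deviation)  # 3 -> no profitable deviation
```

Under these assumed payoffs, no unilateral deviation from mutual commitment is profitable: abandoning the commitment drops the deviator into the defection outcomes (payoff 0 or 1), so the cooperative outcome is an equilibrium, whereas without the commitment phase defection dominates. The paper's contribution, roughly, is learning such commitment behavior with policy gradients in sequential (Markov) settings rather than hand-enumerating strategies as done here.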
Cite
Text
Zhu et al. "Learning to Negotiate via Voluntary Commitment." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.

Markdown

[Zhu et al. "Learning to Negotiate via Voluntary Commitment." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/zhu2025aistats-learning/)

BibTeX
@inproceedings{zhu2025aistats-learning,
  title = {{Learning to Negotiate via Voluntary Commitment}},
  author = {Zhu, Shuhui and Wang, Baoxiang and Subramanian, Sriram Ganapathi and Poupart, Pascal},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  year = {2025},
  pages = {1459--1467},
  volume = {258},
  url = {https://mlanthology.org/aistats/2025/zhu2025aistats-learning/}
}