A Contextual Combinatorial Bandit Approach to Negotiation
Abstract
Learning effective negotiation strategies poses two key challenges: the exploration-exploitation dilemma and dealing with large action spaces. However, learning-based approaches that effectively address both challenges in negotiation are lacking. This paper introduces a comprehensive formulation to tackle various negotiation problems. Our approach leverages contextual combinatorial multi-armed bandits, with the bandits resolving the exploration-exploitation dilemma and the combinatorial structure handling large action spaces. Building upon this formulation, we introduce NegUCB, a novel method that also handles common issues such as partial observations and complex reward functions in negotiation. NegUCB is contextual and tailored for full-bandit feedback without constraints on the reward functions. Under mild assumptions, it ensures a sub-linear regret upper bound. Experiments conducted on three negotiation tasks demonstrate the superiority of our approach.
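For readers unfamiliar with the setting, the sketch below illustrates a generic contextual combinatorial UCB loop: score each arm by an estimated mean plus an exploration bonus, pick a top-k subset (the combinatorial action), and update from the subset's total reward (full-bandit feedback). This is not the paper's NegUCB algorithm; the dimensions, the exploration weight, the top-k selection rule, and the equal credit split are illustrative assumptions only.

```python
# Minimal contextual combinatorial LinUCB-style sketch (illustrative, not NegUCB).
import numpy as np

dim, n_arms, k, alpha = 8, 20, 3, 1.0   # feature dim, arms, subset size, exploration weight (assumed)
A = np.eye(dim)                         # ridge-regression design matrix
b = np.zeros(dim)                       # accumulated reward-weighted features

def select_subset(contexts):
    """Score each arm with mean + exploration bonus, then pick the top-k subset."""
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b                   # current parameter estimate
    bonus = np.sqrt(np.einsum("ij,jk,ik->i", contexts, A_inv, contexts))
    scores = contexts @ theta + alpha * bonus
    return np.argsort(scores)[-k:]      # combinatorial action: a subset of arms

def update(contexts, chosen, reward):
    """Full-bandit feedback: only the subset's total reward is observed; split credit equally (naive)."""
    global b
    for i in chosen:
        A[:] += np.outer(contexts[i], contexts[i])
        b += (reward / k) * contexts[i]

rng = np.random.default_rng(0)
for t in range(100):
    X = rng.normal(size=(n_arms, dim))  # per-arm context at round t
    S = select_subset(X)
    r = float(X[S].sum())               # placeholder environment reward
    update(X, S, r)
```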
Cite
Text
Li et al. "A Contextual Combinatorial Bandit Approach to Negotiation." International Conference on Machine Learning, 2024.
Markdown
[Li et al. "A Contextual Combinatorial Bandit Approach to Negotiation." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/li2024icml-contextual/)
BibTeX
@inproceedings{li2024icml-contextual,
title = {{A Contextual Combinatorial Bandit Approach to Negotiation}},
author = {Li, Yexin and Mu, Zhancun and Qi, Siyuan},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {28448--28465},
volume = {235},
url = {https://mlanthology.org/icml/2024/li2024icml-contextual/}
}