A Deep Reinforcement Learning Approach to Concurrent Bilateral Negotiation
Abstract
We present a novel negotiation model that allows an agent to learn how to negotiate during concurrent bilateral negotiations in unknown and dynamic e-markets. The agent uses an actor-critic architecture with model-free reinforcement learning to learn a strategy expressed as a deep neural network. We pre-train the strategy by supervision from synthetic market data, thereby decreasing the exploration time required for learning during negotiation. As a result, we can build automated agents for concurrent negotiations that can adapt to different e-market settings without the need to be pre-programmed. Our experimental evaluation shows that our deep reinforcement learning-based agents outperform two existing well-known negotiation strategies in one-to-many concurrent bilateral negotiations for a range of e-market settings.
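The abstract's idea of pre-training the strategy network by supervision on synthetic market data, before reinforcement learning takes over, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the two-feature state (normalised time, opponent's last offer), the synthetic "teacher" concession strategy, and the tiny one-hidden-layer network are not the authors' actual architecture or data.

```python
import numpy as np

# Hypothetical sketch: supervised pre-training of a strategy network on
# synthetic data, as a warm start before actor-critic RL fine-tuning.
rng = np.random.default_rng(0)

# Synthetic market data: state = (normalised time t, opponent's last offer u).
X = rng.uniform(0.0, 1.0, size=(512, 2))
t, u = X[:, 0], X[:, 1]
# Illustrative "teacher" target: start near utility 1.0 and concede linearly
# toward the opponent's offer as the deadline approaches.
y = ((1.0 - t) * 1.0 + t * u).reshape(-1, 1)

# One-hidden-layer MLP: the "actor" being pre-trained.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    return h, h @ W2 + b2             # predicted counter-offer utility

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

loss_before = mse(forward(X)[1], y)

lr = 0.05
for _ in range(2000):                  # plain batch gradient descent on MSE
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)      # dL/dpred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

loss_after = mse(forward(X)[1], y)
```

After this supervised phase, the pre-trained weights would initialise the actor, so the RL agent starts exploring from a sensible concession behaviour rather than from random offers, which is the exploration-time saving the abstract describes.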
Cite
Text
Bagga et al. "A Deep Reinforcement Learning Approach to Concurrent Bilateral Negotiation." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/42
Markdown
[Bagga et al. "A Deep Reinforcement Learning Approach to Concurrent Bilateral Negotiation." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/bagga2020ijcai-deep/) doi:10.24963/IJCAI.2020/42
BibTeX
@inproceedings{bagga2020ijcai-deep,
title = {{A Deep Reinforcement Learning Approach to Concurrent Bilateral Negotiation}},
author = {Bagga, Pallavi and Paoletti, Nicola and Alrayes, Bedour and Stathis, Kostas},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
pages = {297--303},
doi = {10.24963/IJCAI.2020/42},
url = {https://mlanthology.org/ijcai/2020/bagga2020ijcai-deep/}
}