Learning Other Agents' Preferences in Multiagent Negotiation
Abstract
In multiagent systems, an agent does not usually have complete information about the preferences and decision-making processes of other agents. This might prevent the agents from making coordinated choices, purely due to their ignorance of what others want. This paper describes the integration of a learning module into a communication-intensive negotiating agent architecture. The learning module gives the agents the ability to learn about other agents' preferences via past interactions. Over time, the agents can incrementally update their models of other agents' preferences and use them to make better coordinated decisions. Combining communication and learning, as two complementary knowledge acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply not desirable (e.g., to preserve individual privacy).
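The idea of incrementally updating a model of another agent's preferences from past interactions can be illustrated with a minimal sketch. This is a hypothetical frequency-based estimator, not the paper's actual algorithm: the modeled agent's preference for an option is approximated by how often it accepted that option in earlier negotiation rounds.

```python
from collections import defaultdict


class PreferenceModel:
    """Hypothetical incremental model of another agent's preferences.

    Estimates preference as the normalized frequency of options the
    other agent accepted in past interactions (illustrative only).
    """

    def __init__(self):
        self.counts = defaultdict(int)
        self.total = 0

    def observe(self, accepted_option):
        # Update the model after each negotiation round.
        self.counts[accepted_option] += 1
        self.total += 1

    def preference(self, option):
        # Estimated probability that the other agent prefers this option.
        if self.total == 0:
            return 0.0
        return self.counts[option] / self.total

    def best_guess(self, options):
        # Option the other agent is most likely to accept, used to
        # propose coordinated choices without further communication.
        return max(options, key=self.preference)


model = PreferenceModel()
for choice in ["tea", "coffee", "tea", "tea", "coffee"]:
    model.observe(choice)
print(model.best_guess(["tea", "coffee"]))  # → tea
```

As the abstract notes, such a learned model can substitute for explicit queries, reducing the communication needed on average once enough interactions have been observed.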
Cite
Text
Bui et al. "Learning Other Agents' Preferences in Multiagent Negotiation." AAAI Conference on Artificial Intelligence, 1996.

Markdown
[Bui et al. "Learning Other Agents' Preferences in Multiagent Negotiation." AAAI Conference on Artificial Intelligence, 1996.](https://mlanthology.org/aaai/1996/bui1996aaai-learning/)

BibTeX
@inproceedings{bui1996aaai-learning,
title = {{Learning Other Agents' Preferences in Multiagent Negotiation}},
author = {Bui, Hung Hai and Kieronska, Dorota H. and Venkatesh, Svetha},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1996},
pages = {114--119},
url = {https://mlanthology.org/aaai/1996/bui1996aaai-learning/}
}