Learning When to Collaborate Among Learning Agents

Abstract

Multiagent systems offer a new paradigm where learning techniques can be useful. We focus on the application of lazy learning to multiagent systems in which each agent learns individually and also learns when to cooperate in order to improve its performance. We show some experiments in which CBR agents use an adapted version of LID (Lazy Induction of Descriptions), a CBR method for classification. We discuss a collaboration policy among agents, called Bounded Counsel, that improves the agents' performance with respect to their isolated performance. Later, we use decision tree induction and discretization techniques to learn how to tune the Bounded Counsel policy to a specific multiagent system, while always preserving the individual autonomy of the agents and the privacy of their case bases. Empirical results concerning accuracy, cost, and robustness with respect to the number of agents and case-base size are presented. Moreover, comparisons with the Committee collaboration policy (in which all agents always collaborate) are also presented.
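The contrast between the two policies can be illustrated with a minimal sketch. This is purely a hypothetical toy, not the paper's actual CBR/LID implementation: the `Agent` class, the fixed confidence values, and the `threshold` parameter are all illustrative assumptions. It only shows the structural idea that a Committee agent always polls its peers, while a Bounded Counsel agent solves alone when confident and asks for counsel otherwise.

```python
from collections import Counter

class Agent:
    """Toy classifier agent: predict() returns a (label, confidence) pair.
    In the paper the agents are CBR agents using LID; here the answer is
    just a fixed value for illustration."""
    def __init__(self, label, confidence):
        self.label = label
        self.confidence = confidence

    def predict(self, problem):
        return self.label, self.confidence

def committee(agents, problem):
    """Committee policy: every agent always votes; the majority label wins."""
    votes = Counter(agent.predict(problem)[0] for agent in agents)
    return votes.most_common(1)[0][0]

def bounded_counsel(agents, problem, threshold=0.8):
    """Bounded Counsel policy (sketch): the first agent answers alone when
    its confidence reaches the threshold, and only polls the other agents
    otherwise. The threshold value is an arbitrary assumption."""
    label, confidence = agents[0].predict(problem)
    if confidence >= threshold:
        return label                        # solve individually, no collaboration cost
    return committee(agents, problem)       # fall back to asking the peers
```

The point of tuning (which the paper does with decision tree induction and discretization) is precisely to learn when the fall-back to collaboration is worthwhile, rather than fixing a threshold by hand as this sketch does.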

Cite

Text

Ontañón and Plaza. "Learning When to Collaborate Among Learning Agents." European Conference on Machine Learning, 2001. doi:10.1007/3-540-44795-4_34

Markdown

[Ontañón and Plaza. "Learning When to Collaborate Among Learning Agents." European Conference on Machine Learning, 2001.](https://mlanthology.org/ecmlpkdd/2001/ontanon2001ecml-learning/) doi:10.1007/3-540-44795-4_34

BibTeX

@inproceedings{ontanon2001ecml-learning,
  title     = {{Learning When to Collaborate Among Learning Agents}},
  author    = {Ontañón, Santiago and Plaza, Enric},
  booktitle = {European Conference on Machine Learning},
  year      = {2001},
  pages     = {394--405},
  doi       = {10.1007/3-540-44795-4_34},
  url       = {https://mlanthology.org/ecmlpkdd/2001/ontanon2001ecml-learning/}
}