COllective INtelligence with Sequences of Actions - Coordinating Actions in Multi-Agent Systems

Abstract

The design of a Multi-Agent System (MAS) that performs well on a collective task is non-trivial. Straightforward application of learning in a MAS can lead to suboptimal solutions as agents compete or interfere with one another. The COllective INtelligence (COIN) framework of Wolpert et al. proposes an engineering solution for MASs in which agents learn to focus on actions that support a common task. As a case study, we investigate the performance of COIN on representative token-retrieval problems found to be difficult for agents using classic Reinforcement Learning (RL). We further investigate several techniques from RL (model-based learning, Q(λ)) to scale the application of the COIN framework. Lastly, the COIN framework is extended to improve performance on sequences of actions.

Cite

Text

't Hoen and Bohté. "COllective INtelligence with Sequences of Actions - Coordinating Actions in Multi-Agent Systems." European Conference on Machine Learning, 2003. doi:10.1007/978-3-540-39857-8_18

Markdown

['t Hoen and Bohté. "COllective INtelligence with Sequences of Actions - Coordinating Actions in Multi-Agent Systems." European Conference on Machine Learning, 2003.](https://mlanthology.org/ecmlpkdd/2003/hoen2003ecml-collective/) doi:10.1007/978-3-540-39857-8_18

BibTeX

@inproceedings{hoen2003ecml-collective,
  title     = {{COllective INtelligence with Sequences of Actions - Coordinating Actions in Multi-Agent Systems}},
  author    = {'t Hoen, Pieter Jan and Bohté, Sander M.},
  booktitle = {European Conference on Machine Learning},
  year      = {2003},
  pages     = {181--192},
  doi       = {10.1007/978-3-540-39857-8_18},
  url       = {https://mlanthology.org/ecmlpkdd/2003/hoen2003ecml-collective/}
}