Co-Operative Reinforcement Learning by Payoff Filters (Extended Abstract)
Abstract
This paper proposes an extension of Reinforcement Learning (RL) to acquire co-operation among agents. The idea is to learn a filtered payoff that reflects a global objective function but does not require mass communication among agents. It is shown that two typical co-operation tasks can be acquired by preparing simple filter functions: an averaging filter for co-operative tasks and an enhancement filter for deadlock-prevention tasks. The performance of these systems was tested through computer simulations of the n-person prisoner's dilemma and a traffic control problem.
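The extended abstract does not define the filter functions, but the averaging filter it names can plausibly be read as replacing each agent's raw payoff with the mean payoff of its local neighbourhood, so that individual reinforcement tracks the group's outcome without global communication. A minimal sketch under that assumption (all names here are hypothetical, not from the paper):

```python
def averaging_filter(payoffs, neighbours):
    """Replace each agent's raw payoff with the mean payoff of its
    neighbourhood (the agent itself plus its listed neighbours).

    payoffs    -- list of raw payoffs, one per agent
    neighbours -- neighbours[i] is the list of agent i's neighbour indices
    """
    filtered = []
    for i in range(len(payoffs)):
        group = [i] + neighbours[i]
        filtered.append(sum(payoffs[j] for j in group) / len(group))
    return filtered

# Example: three agents on a line; the middle agent neighbours both ends.
payoffs = [4.0, 0.0, 2.0]
neighbours = [[1], [0, 2], [1]]
print(averaging_filter(payoffs, neighbours))  # → [2.0, 2.0, 1.0]
```

Feeding the filtered values (rather than the raw payoffs) to each agent's RL update is what would couple individual learning to the collective outcome while keeping communication local.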
Cite
Text
Mikami et al. "Co-Operative Reinforcement Learning by Payoff Filters (Extended Abstract)." European Conference on Machine Learning, 1995. doi:10.1007/3-540-59286-5_77
Markdown
[Mikami et al. "Co-Operative Reinforcement Learning by Payoff Filters (Extended Abstract)." European Conference on Machine Learning, 1995.](https://mlanthology.org/ecmlpkdd/1995/mikami1995ecml-cooperative/) doi:10.1007/3-540-59286-5_77
BibTeX
@inproceedings{mikami1995ecml-cooperative,
title = {{Co-Operative Reinforcement Learning by Payoff Filters (Extended Abstract)}},
author = {Mikami, Sadayoshi and Kakazu, Yukinori and Fogarty, Terence C.},
booktitle = {European Conference on Machine Learning},
year = {1995},
pages = {319--322},
doi = {10.1007/3-540-59286-5_77},
url = {https://mlanthology.org/ecmlpkdd/1995/mikami1995ecml-cooperative/}
}