Learning to Coordinate Actions in Multi-Agent-Systems
Abstract
This paper deals with learning in reactive multi-agent systems. The central problem addressed is how several agents can collectively learn to coordinate their actions such that they solve a given environmental task together. In approaching this problem, two important constraints have to be taken into consideration: the incompatibility constraint, that is, the fact that different actions may be mutually exclusive; and the local information constraint, that is, the fact that each agent typically knows only a fraction of its environment. The contents of the paper are as follows. First, the topic of learning in multi-agent systems is motivated (section 1). Then, two algorithms called ACE and AGE (standing for ACtion Estimation and Action Group Estimation, respectively) for the reinforcement learning of appropriate sequences of action sets in multi-agent systems are described (section 2). Next, experimental results illustrating the learning abilities of these algorithms are presented (section 3). Finally, the algorithms are discussed and an outlook on future research is provided (section 4).
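To make the two constraints concrete, the following is a toy sketch of the general idea only, estimate-based selection of a compatible action set among several agents; it is NOT a reproduction of the paper's ACE or AGE algorithms, and all agent names, actions, and numbers below are made up for illustration.

```python
# Hypothetical setup: three agents with two actions each; the pair listed
# in INCOMPATIBLE is mutually exclusive (the incompatibility constraint).
ACTIONS = {"a1": ["left", "right"], "a2": ["left", "push"], "a3": ["push", "wait"]}
INCOMPATIBLE = {frozenset([("a1", "left"), ("a2", "left")])}

# Each agent keeps only a scalar usefulness estimate per action -- a crude
# stand-in for the local information constraint (no global world model).
estimates = {
    ("a1", "left"): 0.9, ("a1", "right"): 0.5,
    ("a2", "left"): 0.8, ("a2", "push"): 0.4,
    ("a3", "push"): 0.7, ("a3", "wait"): 0.1,
}
ALPHA = 0.1  # learning rate

def select_action_set():
    """Greedily pick one action per agent in order of decreasing estimate,
    skipping any action incompatible with one already chosen."""
    chosen, taken = [], set()
    for (agent, action), _ in sorted(estimates.items(), key=lambda kv: -kv[1]):
        if agent in taken:
            continue
        if any(frozenset([(agent, action), c]) in INCOMPATIBLE for c in chosen):
            continue
        chosen.append((agent, action))
        taken.add(agent)
    return chosen

def update(chosen, reward):
    """Share the environmental reward equally among the executing agents
    and move each estimate toward its share (a simple credit scheme)."""
    for key in chosen:
        estimates[key] += ALPHA * (reward / len(chosen) - estimates[key])

acts = select_action_set()
print(sorted(acts))  # → [('a1', 'left'), ('a2', 'push'), ('a3', 'push')]
update(acts, 2.0)    # pretend the environment paid 2.0 for this action set
```

Note how agent a2 is forced off its individually preferred action "left" because a1 already claimed the conflicting action, which is exactly the kind of coordination the paper's algorithms must learn rather than have hard-coded.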
Cite
Text
Weiss. "Learning to Coordinate Actions in Multi-Agent-Systems." International Joint Conference on Artificial Intelligence, 1993.

Markdown

[Weiss. "Learning to Coordinate Actions in Multi-Agent-Systems." International Joint Conference on Artificial Intelligence, 1993.](https://mlanthology.org/ijcai/1993/weiss1993ijcai-learning/)

BibTeX
@inproceedings{weiss1993ijcai-learning,
title = {{Learning to Coordinate Actions in Multi-Agent-Systems}},
author = {Weiss, Gerhard},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1993},
pages = {311-317},
url = {https://mlanthology.org/ijcai/1993/weiss1993ijcai-learning/}
}