Fast Concurrent Reinforcement Learners

Abstract

When several agents learn concurrently, the payoff received by an agent depends on the behavior of the other agents. As the other agents learn, the reward of any one agent becomes non-stationary. This makes learning in multiagent systems more difficult than single-agent learning. A few methods, however, are known to guarantee convergence to equilibrium in the limit in such systems. In this paper we experimentally study one such technique, minimax-Q, in a competitive domain and prove its equivalence with another well-known method for competitive domains. We study the rate of convergence of minimax-Q and investigate possible ways to accelerate it. We also present a variant of the algorithm, minimax-SARSA, and prove its convergence to minimax-Q values under appropriate conditions. Finally, we show that this new algorithm performs better than simple minimax-Q in a general-sum domain as well.
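The minimax-Q backup the abstract refers to can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the full algorithm computes the state value by solving a linear program over mixed strategies, whereas this sketch uses a pure-strategy maximin for brevity; all names and constants below are made up for the example.

```python
from collections import defaultdict

# Illustrative sketch of a minimax-Q style update in a two-player
# zero-sum game. The real algorithm solves a linear program over
# mixed strategies for V(s); here we approximate with pure-strategy
# maximin to keep the example self-contained.

ACTIONS = OPP_ACTIONS = [0, 1]  # assumed tiny action sets
GAMMA, ALPHA = 0.9, 0.1         # assumed discount and learning rate

Q = defaultdict(float)  # keyed by (state, my_action, opp_action)

def value(state):
    # Pure-strategy maximin approximation of the minimax value of `state`.
    return max(min(Q[(state, a, o)] for o in OPP_ACTIONS) for a in ACTIONS)

def minimax_q_update(s, a, o, r, s_next):
    # Temporal-difference backup toward r + gamma * V(s').
    target = r + GAMMA * value(s_next)
    Q[(s, a, o)] = (1 - ALPHA) * Q[(s, a, o)] + ALPHA * target

# One sample transition: reward 1.0 moves Q toward the target.
minimax_q_update('s0', 0, 1, 1.0, 's1')
```

The minimax-SARSA variant discussed in the paper differs in the backup target (it uses the value of the action pair actually taken next, on-policy, rather than the minimax value), but the overall update structure is the same.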

Cite

Text

Banerjee et al. "Fast Concurrent Reinforcement Learners." International Joint Conference on Artificial Intelligence, 2001.

Markdown

[Banerjee et al. "Fast Concurrent Reinforcement Learners." International Joint Conference on Artificial Intelligence, 2001.](https://mlanthology.org/ijcai/2001/banerjee2001ijcai-fast/)

BibTeX

@inproceedings{banerjee2001ijcai-fast,
  title     = {{Fast Concurrent Reinforcement Learners}},
  author    = {Banerjee, Bikramjit and Sen, Sandip and Peng, Jing},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2001},
  pages     = {825--832},
  url       = {https://mlanthology.org/ijcai/2001/banerjee2001ijcai-fast/}
}