Non-Stationary Policy Learning in 2-Player Zero Sum Games

Abstract

A key challenge in multiagent environments is the construction of agents that are able to learn while acting in the presence of other agents that are simultaneously learning and adapting. These domains require on-line learning methods without the benefit of repeated training examples, as well as the ability to adapt to the evolving behavior of other agents in the environment. The difficulty is further exacerbated when the agents are in an adversarial relationship, demanding that a robust (i.e. winning) non-stationary policy be rapidly learned and adapted. We propose an on-line sequence learning algorithm, ELPH, based on a straightforward entropy pruning technique that is able to rapidly learn and adapt to non-stationary policies. We demonstrate the performance of this method in a non-stationary learning environment of adversarial zero-sum matrix games.
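To make the abstract's idea concrete, here is a minimal, hedged sketch of an ELPH-style learner playing a zero-sum matrix game (rock-paper-scissors is assumed here purely for illustration). It tracks hypotheses that map sub-patterns of the recent opponent history to next-move frequency counts, prunes hypotheses whose prediction distribution has high Shannon entropy, and best-responds to the lowest-entropy prediction. The window size, entropy threshold, and all identifiers below are illustrative assumptions, not the paper's exact implementation.

```python
import math
from collections import defaultdict, deque
from itertools import combinations

MOVES = ["R", "P", "S"]
BEATS = {"R": "P", "P": "S", "S": "R"}  # BEATS[x] is the move that beats x


def entropy(counts):
    """Shannon entropy (bits) of a frequency-count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


class SequencePredictor:
    """Sketch of an ELPH-style learner (assumed details): hypotheses map
    sub-patterns of the recent history window to counts of the opponent's
    next move; high-entropy (unreliable) hypotheses are ignored at
    prediction time, which is the entropy-pruning idea from the abstract."""

    def __init__(self, window=3, h_thresh=1.0):
        self.window = window              # length of the history window (assumed)
        self.h_thresh = h_thresh          # entropy pruning threshold (assumed)
        self.history = deque(maxlen=window)
        self.hyps = defaultdict(lambda: defaultdict(int))

    def _patterns(self):
        # All non-empty sub-patterns of the current history window,
        # tagged with their positions so the same moves at different
        # lags remain distinct hypotheses.
        h = list(self.history)
        for r in range(1, len(h) + 1):
            for idx in combinations(range(len(h)), r):
                yield tuple((i, h[i]) for i in idx)

    def update(self, opp_move):
        """Credit the opponent's observed move to every active pattern."""
        for p in self._patterns():
            self.hyps[p][opp_move] += 1
        self.history.append(opp_move)

    def predict(self):
        """Predict the opponent's next move from the lowest-entropy
        hypothesis whose entropy falls below the pruning threshold."""
        best, best_h = None, float("inf")
        for p in self._patterns():
            counts = self.hyps.get(p)
            if not counts:
                continue
            h = entropy(counts)
            if h < self.h_thresh and h < best_h:
                best, best_h = counts, h
        if best is None:
            return None                   # no reliable hypothesis yet
        return max(best, key=best.get)

    def act(self):
        guess = self.predict()
        return BEATS[guess] if guess else MOVES[0]
```

Against a stationary or slowly drifting opponent pattern (e.g. one that cycles R, P, S), the deterministic sub-pattern hypotheses quickly reach zero entropy and drive prediction, while mixed, uninformative hypotheses are excluded by the threshold; when the opponent's policy shifts, those hypotheses' entropies rise and they stop being trusted, which is what allows adaptation to non-stationary play.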

Cite

Text

Jensen et al. "Non-Stationary Policy Learning in 2-Player Zero Sum Games." AAAI Conference on Artificial Intelligence, 2005.

Markdown

[Jensen et al. "Non-Stationary Policy Learning in 2-Player Zero Sum Games." AAAI Conference on Artificial Intelligence, 2005.](https://mlanthology.org/aaai/2005/jensen2005aaai-non/)

BibTeX

@inproceedings{jensen2005aaai-non,
  title     = {{Non-Stationary Policy Learning in 2-Player Zero Sum Games}},
  author    = {Jensen, Steven and Boley, Daniel and Gini, Maria L. and Schrater, Paul R.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2005},
  pages     = {789--794},
  url       = {https://mlanthology.org/aaai/2005/jensen2005aaai-non/}
}