Achieving Cooperation in a Minimally Constrained Environment

Abstract

We describe a simple environment to study cooperation between two agents, and a method of achieving cooperation in that environment. The environment consists of randomly generated normal form games with uniformly distributed payoffs. Agents play multiple games against each other, each game drawn independently from the random distribution. In this environment cooperation is difficult. Tit-for-Tat cannot be used because moves are not labeled as “cooperate” or “defect”, fictitious play cannot be used because the agent never sees the same game twice, and approaches suitable for stochastic games cannot be used because the set of states is not finite. Our agent identifies cooperative moves by assigning an attitude to its opponent and to itself. The attitude determines how much a player values its opponent’s payoff, i.e., how much the player is willing to deviate from strictly self-interested behavior. To cooperate, our agent estimates the attitude of its opponent by observing its moves and reciprocates by setting its own attitude accordingly. We show how the opponent’s attitude can be estimated using a particle filter, even when the opponent is changing its attitude.
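The two ideas in the abstract — an attitude that weights the opponent's payoff into a player's objective, and a particle filter that infers that attitude from observed moves — can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the linear weighting `own + attitude * opp`, the attitude range [-1, 1], the softmax observation model, and all parameter values and function names are assumptions made here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_game(n=3):
    # A random normal-form game: two n-by-n payoff matrices with
    # payoffs drawn uniformly from [0, 1], as in the environment described.
    return rng.uniform(size=(n, n)), rng.uniform(size=(n, n))

def best_response(own, opp, attitude):
    # Attitude-adjusted payoff: own payoff plus attitude times the
    # opponent's payoff (attitude 0 = purely selfish). The player
    # best-responds to a uniform belief over the opponent's moves.
    adjusted = own + attitude * opp
    return int(np.argmax(adjusted.mean(axis=1)))

def pf_update(particles, weights, move, own, opp, beta=20.0, jitter=0.02):
    # One particle-filter step. Each particle is a hypothesized attitude.
    # Expected adjusted payoff of each row move under each particle:
    scores = own.mean(axis=1)[None, :] + particles[:, None] * opp.mean(axis=1)[None, :]
    # Softmax observation model: larger beta approaches a deterministic
    # best response. (The paper's actual likelihood may differ.)
    probs = np.exp(beta * (scores - scores.max(axis=1, keepdims=True)))
    probs /= probs.sum(axis=1, keepdims=True)
    weights = weights * probs[:, move]
    weights /= weights.sum()
    # Resample and jitter the particles so the filter can keep tracking
    # an opponent whose attitude drifts over time.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = np.clip(particles[idx] + rng.normal(0.0, jitter, len(particles)), -1.0, 1.0)
    return particles, np.full(len(particles), 1.0 / len(particles))

# Simulate an opponent with a fixed true attitude and estimate it.
particles = rng.uniform(-1.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
true_attitude = 0.8
for _ in range(200):
    pay_self, pay_opp = random_game()
    # For simplicity the opponent is treated as the row player of its
    # own payoff matrix; each game is a fresh independent draw.
    move = best_response(pay_opp, pay_self, true_attitude)
    particles, weights = pf_update(particles, weights, move, pay_opp, pay_self)
estimate = particles.mean()
```

A reciprocating agent would then set its own attitude from `estimate`, e.g. matching it, so that cooperative opponents are met with cooperation and selfish ones are not.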

Cite

Text

Damer and Gini. "Achieving Cooperation in a Minimally Constrained Environment." AAAI Conference on Artificial Intelligence, 2008.

Markdown

[Damer and Gini. "Achieving Cooperation in a Minimally Constrained Environment." AAAI Conference on Artificial Intelligence, 2008.](https://mlanthology.org/aaai/2008/damer2008aaai-achieving/)

BibTeX

@inproceedings{damer2008aaai-achieving,
  title     = {{Achieving Cooperation in a Minimally Constrained Environment}},
  author    = {Damer, Steven and Gini, Maria L.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2008},
  pages     = {57--62},
  url       = {https://mlanthology.org/aaai/2008/damer2008aaai-achieving/}
}