Competitive Policy Optimization

Abstract

A core challenge in policy optimization in competitive Markov decision processes is the design of efficient optimization methods with desirable convergence and stability properties. We propose competitive policy optimization (CoPO), a novel policy gradient approach that exploits the game-theoretic nature of competitive games to derive policy updates. Motivated by the competitive gradient optimization method, we derive a bilinear approximation of the game objective. In contrast, off-the-shelf policy gradient methods use only linear approximations and hence do not capture the players' interactions. We instantiate CoPO in two ways: (i) competitive policy gradient, and (ii) trust-region competitive policy optimization. We study these methods theoretically and investigate their behavior empirically on a comprehensive set of challenging competitive games. We observe that they provide stable optimization, converge to sophisticated strategies, and achieve higher scores when played against baseline policy gradient methods.
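As a rough sketch of what the bilinear approximation buys over a linear one (the notation below follows the competitive gradient descent formulation and is our own shorthand, not quoted from the paper): let $f(\theta^1, \theta^2)$ be the game objective that player 1 maximizes and player 2 minimizes, and let $\eta$ be the step size. A vanilla policy gradient step keeps only the linear terms $\Delta\theta^{1\top}\nabla_{\theta^1} f$ and $\Delta\theta^{2\top}\nabla_{\theta^2} f$, whereas the bilinear approximation also keeps the interaction term $\Delta\theta^{1\top}\nabla_{\theta^1\theta^2} f\,\Delta\theta^2$, so each player's update is the Nash equilibrium of a regularized local bilinear game:

\[
\Delta\theta^{1*} = \arg\max_{\Delta\theta^1}\; \Delta\theta^{1\top}\nabla_{\theta^1} f + \Delta\theta^{1\top}\nabla_{\theta^1\theta^2} f\,\Delta\theta^{2*} - \tfrac{1}{2\eta}\lVert\Delta\theta^1\rVert^2,
\]
\[
\Delta\theta^{2*} = \arg\min_{\Delta\theta^2}\; \Delta\theta^{2\top}\nabla_{\theta^2} f + \Delta\theta^{1*\top}\nabla_{\theta^1\theta^2} f\,\Delta\theta^2 + \tfrac{1}{2\eta}\lVert\Delta\theta^2\rVert^2.
\]

Solving the coupled first-order conditions gives, for player 1, $\Delta\theta^1 = \eta\,\bigl(I + \eta^2\,\nabla_{\theta^1\theta^2} f\,\nabla_{\theta^2\theta^1} f\bigr)^{-1}\bigl(\nabla_{\theta^1} f - \eta\,\nabla_{\theta^1\theta^2} f\,\nabla_{\theta^2} f\bigr)$, which reduces to the ordinary policy gradient step when the mixed second derivative vanishes, i.e., when the players do not interact locally.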

Cite

Text

Prajapat et al. "Competitive Policy Optimization." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Prajapat et al. "Competitive Policy Optimization." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/prajapat2021uai-competitive/)

BibTeX

@inproceedings{prajapat2021uai-competitive,
  title     = {{Competitive Policy Optimization}},
  author    = {Prajapat, Manish and Azizzadenesheli, Kamyar and Liniger, Alexander and Yue, Yisong and Anandkumar, Anima},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {64--74},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/prajapat2021uai-competitive/}
}