Complex Momentum for Optimization in Games

Abstract

We generalize gradient descent with momentum for optimization in differentiable games to have complex-valued momentum. We give theoretical motivation for our method by proving convergence on bilinear zero-sum games for simultaneous and alternating updates. Our method gives real-valued parameter updates, making it a drop-in replacement for standard optimizers. We empirically demonstrate that complex-valued momentum can improve convergence in realistic adversarial games, such as generative adversarial networks (GANs), by showing that we can find better solutions with almost identical computational cost. We also present a practical complex-valued Adam variant, which we use to train BigGAN and improve Inception scores on CIFAR-10.
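
Since the abstract only sketches the method, below is a minimal NumPy sketch of what such an update can look like: the classic momentum recursion with a complex coefficient beta, where only the real part of the step is applied so the parameters stay real-valued, consistent with the abstract's "drop-in replacement" claim. The bilinear toy game, the hyperparameter values, and the helper name complex_momentum_step are illustrative assumptions, not values or code from the paper.

import numpy as np

def complex_momentum_step(theta, mu, grad, alpha, beta):
    """One momentum step with a complex buffer (illustrative sketch).

    beta is complex, so the momentum buffer mu is complex-valued,
    but only the real part of the step is applied to theta, keeping
    the parameter update real-valued.
    """
    mu = beta * mu - grad                 # complex momentum accumulation
    theta = theta + np.real(alpha * mu)   # real-valued parameter update
    return theta, mu

# Toy bilinear zero-sum game min_x max_y f(x, y) = x * y with
# simultaneous updates; hyperparameters are assumed, not from the paper.
alpha = 0.05
beta = 0.9 * np.exp(1j * np.pi / 8)       # complex momentum coefficient
x, y, mu_x, mu_y = 1.0, 1.0, 0j, 0j
for _ in range(1000):
    grad_x, grad_y = y, -x                # descent for x, ascent for y
    x, mu_x = complex_momentum_step(x, mu_x, grad_x, alpha, beta)
    y, mu_y = complex_momentum_step(y, mu_y, grad_y, alpha, beta)
print(x, y)  # spirals toward the equilibrium (0, 0); plain simultaneous
             # gradient descent diverges on this game

Discarding the imaginary part of the step (rather than the buffer) is what lets the complex buffer store oscillation history while the parameters remain real, which is why the method can replace a standard momentum optimizer without changing the rest of the training loop.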

Cite

Text

Lorraine et al. "Complex Momentum for Optimization in Games." Artificial Intelligence and Statistics, 2022.

Markdown

[Lorraine et al. "Complex Momentum for Optimization in Games." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/lorraine2022aistats-complex/)

BibTeX

@inproceedings{lorraine2022aistats-complex,
  title     = {{Complex Momentum for Optimization in Games}},
  author    = {Lorraine, Jonathan P. and Acuna, David and Vicol, Paul and Duvenaud, David},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {7742--7765},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/lorraine2022aistats-complex/}
}