On Frank-Wolfe and Equilibrium Computation

Abstract

We consider the Frank-Wolfe (FW) method for constrained convex optimization, and we show that this classical technique can be interpreted from a different perspective: FW emerges as the computation of an equilibrium (saddle point) of a special convex-concave zero-sum game. This saddle-point trick relies on no-regret online learning both to generate the sequence of iterates and to provide a proof of convergence through vanishing regret. We show that our stated equivalence has several nice properties, as it exhibits a modularity that gives rise to various old and new algorithms. We explore a few such resulting methods, and provide experimental results to demonstrate correctness and efficiency.
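For context, the classical Frank-Wolfe method referenced in the abstract replaces projection with a linear minimization oracle over the constraint set. The sketch below is an illustrative implementation, not the paper's game-theoretic variant; the example problem (a quadratic over the probability simplex) and the standard step size 2/(t+2) are assumptions for demonstration.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, T=200):
    """Classical Frank-Wolfe (conditional gradient) sketch.

    grad: gradient of the objective f
    lmo:  linear minimization oracle, returning argmin_{s in C} <g, s>
    x0:   feasible starting point in C
    """
    x = x0.copy()
    for t in range(T):
        g = grad(x)
        s = lmo(g)              # vertex of C minimizing the linearization
        gamma = 2.0 / (t + 2)   # standard open-loop step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays in C
    return x

# Hypothetical example: minimize ||x - b||^2 over the probability simplex,
# whose LMO simply returns the vertex e_i with i = argmin_i g_i.
b = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - b)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x_star = frank_wolfe(grad, lmo, np.ones(3) / 3.0)
```

Since iterates are convex combinations of simplex vertices, feasibility is maintained without any projection step, which is the main appeal of FW on structured constraint sets.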

Cite

Text

Abernethy and Wang. "On Frank-Wolfe and Equilibrium Computation." Neural Information Processing Systems, 2017.

Markdown

[Abernethy and Wang. "On Frank-Wolfe and Equilibrium Computation." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/abernethy2017neurips-frankwolfe/)

BibTeX

@inproceedings{abernethy2017neurips-frankwolfe,
  title     = {{On Frank-Wolfe and Equilibrium Computation}},
  author    = {Abernethy, Jacob D. and Wang, Jun-Kun},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {6584--6593},
  url       = {https://mlanthology.org/neurips/2017/abernethy2017neurips-frankwolfe/}
}