Quasi-Newton Methods for Saddle Point Problems

Abstract

This paper studies quasi-Newton methods for strongly-convex-strongly-concave saddle point problems. We propose random Broyden family updates, which have an explicit local superlinear convergence rate of ${\mathcal O}\big(\big(1-1/(d\varkappa^2)\big)^{k(k-1)/2}\big)$, where $d$ is the dimension of the problem, $\varkappa$ is the condition number, and $k$ is the number of iterations. The design and analysis of the proposed algorithms are based on estimating the square of the indefinite Hessian matrix, which differs from classical quasi-Newton methods in convex optimization. We also present two specific Broyden family algorithms with BFGS-type and SR1-type updates, which enjoy a faster local convergence rate of $\mathcal O\big(\big(1-1/d\big)^{k(k-1)/2}\big)$. Our numerical experiments show that the proposed algorithms outperform classical first-order methods.
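The key idea in the abstract can be illustrated concretely: for a saddle point problem the Hessian $H$ is symmetric but indefinite, yet its square $H^2$ is positive definite, so classical quasi-Newton updates can be applied to $H^2$. Below is a minimal numpy sketch of this idea using a BFGS-type update on the squared Hessian; the matrix `H`, the random-direction loop, and the update rule here are an illustrative simplification under stated assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Sketch: estimate A = H @ H with BFGS-type Broyden updates, where H is a
# symmetric *indefinite* Hessian (as in saddle point problems). Since H is
# nonsingular, A is symmetric positive definite, so the update is well
# defined even though H itself is not.
rng = np.random.default_rng(0)
d = 5

# Illustrative indefinite Hessian: eigenvalues of mixed sign.
eigs = np.array([1.0, -1.5, 2.0, -2.0, 1.2])
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = Q @ np.diag(eigs) @ Q.T
A = H @ H  # positive definite target

# Initialize with an over-estimate G0 = L * I, L >= ||A||_2, so G0 - A >= 0.
L = np.linalg.norm(A, 2) + 1.0
G = L * np.eye(d)

def bfgs_update(G, A, u):
    """One BFGS-type update of the estimate G toward A along direction u."""
    Gu, Au = G @ u, A @ u
    return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)

err0 = np.linalg.norm(G - A)
for _ in range(200):
    u = rng.standard_normal(d)  # random direction, as in randomized schemes
    G = bfgs_update(G, A, u)
err = np.linalg.norm(G - A)
print(err0, err)  # the approximation error shrinks rapidly
```

In the full method, the estimate of $H^2$ replaces the exact squared Hessian in a Newton-type step, which is what yields the superlinear rates quoted in the abstract.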

Cite

Text

Liu and Luo. "Quasi-Newton Methods for Saddle Point Problems." Neural Information Processing Systems, 2022.

Markdown

[Liu and Luo. "Quasi-Newton Methods for Saddle Point Problems." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/liu2022neurips-quasinewton/)

BibTeX

@inproceedings{liu2022neurips-quasinewton,
  title     = {{Quasi-Newton Methods for Saddle Point Problems}},
  author    = {Liu, Chengchang and Luo, Luo},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/liu2022neurips-quasinewton/}
}