Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization
Abstract
We propose a novel Stochastic Frank-Wolfe (a.k.a. conditional gradient) algorithm for constrained smooth finite-sum minimization with a generalized linear prediction/structure. This class of problems includes empirical risk minimization with sparse, low-rank, or other structured constraints. The proposed method is simple to implement, does not require step-size tuning, and has a constant per-iteration cost that is independent of the dataset size. Furthermore, as a byproduct of the method we obtain a stochastic estimator of the Frank-Wolfe gap that can be used as a stopping criterion. Depending on the setting, the proposed method matches or improves on the best computational guarantees for Stochastic Frank-Wolfe algorithms. Benchmarks on several datasets highlight different regimes in which the proposed method exhibits faster empirical convergence than related methods. Finally, we provide an implementation of all considered methods in an open-source package.
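To illustrate the general template the abstract refers to, here is a minimal sketch of a stochastic Frank-Wolfe iteration for least squares over an ℓ1 ball. This is a generic variant with minibatch gradients, momentum-style gradient averaging, and the standard 2/(t+2) step size; it is not the paper's exact gradient estimator, and the function names (`lmo_l1`, `stochastic_frank_wolfe`) and default parameters are illustrative assumptions.

```python
import numpy as np

def lmo_l1(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <grad, s> is a signed vertex."""
    idx = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[idx] = -radius * np.sign(grad[idx])
    return s

def stochastic_frank_wolfe(A, b, radius=1.0, n_iters=500, batch=8, seed=0):
    """Generic stochastic FW for f(w) = 1/(2n) ||A w - b||^2 over the
    l1 ball. Illustrative sketch only, not the paper's estimator."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    g_avg = np.zeros(d)  # running average of stochastic gradients
    for t in range(1, n_iters + 1):
        i = rng.integers(0, n, size=batch)
        g = A[i].T @ (A[i] @ w - b[i]) / batch   # minibatch gradient
        g_avg = (1 - 1.0 / t) * g_avg + (1.0 / t) * g
        s = lmo_l1(g_avg, radius)
        gap_est = g_avg @ (w - s)   # stochastic FW-gap estimate,
                                    # usable as a stopping criterion
        gamma = 2.0 / (t + 2)       # standard FW step size, no tuning
        w = (1 - gamma) * w + gamma * s
    return w
```

Because each iterate is a convex combination of ℓ1-ball vertices (and the origin), the constraint `||w||_1 <= radius` holds at every step without any projection; the per-iteration cost depends only on the batch size and dimension, not on the dataset size.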
Cite
Text
Negiar et al. "Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization." International Conference on Machine Learning, 2020.
Markdown
[Negiar et al. "Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/negiar2020icml-stochastic/)
BibTeX
@inproceedings{negiar2020icml-stochastic,
title = {{Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization}},
author = {Negiar, Geoffrey and Dresdner, Gideon and Tsai, Alicia and El Ghaoui, Laurent and Locatello, Francesco and Freund, Robert and Pedregosa, Fabian},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {7253--7262},
volume = {119},
url = {https://mlanthology.org/icml/2020/negiar2020icml-stochastic/}
}