Escaping from Moderately Constrained Saddles
Abstract
We give polynomial-time algorithms for escaping from high-dimensional saddle points under a moderate number of constraints. Given gradient access to a smooth function $f \colon \mathbb R^d \to \mathbb R$, we show that (noisy) gradient descent methods can escape from saddle points under a logarithmic number of inequality constraints. This constitutes progress (without reliance on NP-oracles or altering the definitions to only account for certain constraints) on the main open question of the breakthrough work of Ge et al., who showed an analogous result for unconstrained and equality-constrained problems. Our results hold for both regular and stochastic gradient descent.
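For readers unfamiliar with the unconstrained setting the abstract builds on, below is a minimal Python sketch of generic perturbed (noisy) gradient descent in the spirit of Ge et al.: take ordinary gradient steps and, whenever the gradient is small (a candidate saddle point), inject a small random perturbation so the iterate can escape strict saddles. All function names, step sizes, and thresholds here are illustrative assumptions, not the paper's constrained algorithm or its tuned constants.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.01, r=0.1,
                               g_thresh=1e-3, t_noise=50,
                               n_iters=1000, rng=None):
    """Illustrative perturbed gradient descent (unconstrained case).

    grad     : callable returning the gradient of f at a point
    eta      : step size (placeholder value)
    r        : perturbation radius (placeholder value)
    g_thresh : gradient-norm threshold that triggers a perturbation
    t_noise  : minimum number of iterations between perturbations
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    last_noise = -t_noise  # allow a perturbation immediately
    for t in range(n_iters):
        g = grad(x)
        if np.linalg.norm(g) <= g_thresh and t - last_noise >= t_noise:
            # Small gradient: perturb along a uniformly random
            # direction on a ball of radius r to escape a strict saddle.
            u = rng.normal(size=x.shape)
            x = x + r * u / np.linalg.norm(u)
            last_noise = t
        else:
            x = x - eta * g  # ordinary gradient step
    return x

# Example: f(x, y) = x^2 - y^2 has a strict saddle at the origin;
# the perturbation lets the iterate fall off along the -y^2 direction.
x_final = perturbed_gradient_descent(
    lambda x: np.array([2 * x[0], -2 * x[1]]), x0=[0.0, 0.0])
```

The paper's contribution is handling up to logarithmically many inequality constraints on top of this kind of scheme; the sketch above covers only the unconstrained dynamics.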
Cite
Text
Avdiukhin and Yaroslavtsev. "Escaping from Moderately Constrained Saddles." NeurIPS 2022 Workshops: OPT, 2022.

Markdown
[Avdiukhin and Yaroslavtsev. "Escaping from Moderately Constrained Saddles." NeurIPS 2022 Workshops: OPT, 2022.](https://mlanthology.org/neuripsw/2022/avdiukhin2022neuripsw-escaping/)

BibTeX
@inproceedings{avdiukhin2022neuripsw-escaping,
title = {{Escaping from Moderately Constrained Saddles}},
author = {Avdiukhin, Dmitrii and Yaroslavtsev, Grigory},
booktitle = {NeurIPS 2022 Workshops: OPT},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/avdiukhin2022neuripsw-escaping/}
}