Stochastic Adaptive Regularization Method with Cubics: A High Probability Complexity Bound
Abstract
We present a high-probability complexity bound for a stochastic adaptive regularization method with cubics, also known as a regularized Newton method. The method makes use of stochastic zeroth-, first-, and second-order oracles that satisfy certain accuracy and reliability assumptions. Such oracles have been used in the literature by other adaptive stochastic methods, such as trust-region and line-search methods, and they capture many settings, including expected risk minimization and stochastic zeroth-order optimization. In this paper, we give the first high-probability iteration bound for stochastic cubic regularization and show that, just as in the deterministic case, its complexity is superior to that of other adaptive methods.
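To make the setting concrete, below is a minimal, hypothetical Python sketch of a generic stochastic cubic-regularization loop of the kind the abstract describes: noisy zeroth-, first-, and second-order oracles feed a cubic model, a trial step is accepted if the estimated decrease matches a fraction of the model decrease, and the regularization parameter adapts accordingly. The oracle names (`f_est`, `grad_est`, `hess_est`), the gradient-descent sub-solver, and all parameter values are illustrative assumptions, not the paper's algorithm or its oracle conditions.

```python
import numpy as np

def solve_cubic_subproblem(g, H, sigma, n_steps=200, lr=0.05):
    """Approximately minimize the cubic model
        m(s) = g^T s + 0.5 s^T H s + (sigma / 3) ||s||^3
    by plain gradient descent on m (real ARC solvers use more careful
    subproblem methods; this is for illustration only)."""
    s = np.zeros_like(g)
    for _ in range(n_steps):
        grad_m = g + H @ s + sigma * np.linalg.norm(s) * s
        s = s - lr * grad_m
    return s

def stochastic_arc(f_est, grad_est, hess_est, x, sigma=1.0,
                   gamma=2.0, eta=0.1, n_iters=100):
    """A generic adaptive loop driven by noisy oracle estimates."""
    for _ in range(n_iters):
        g, H = grad_est(x), hess_est(x)
        s = solve_cubic_subproblem(g, H, sigma)
        # Decrease predicted by the cubic model (m(0) = 0).
        model_dec = -(g @ s + 0.5 * s @ H @ s
                      + (sigma / 3.0) * np.linalg.norm(s) ** 3)
        # Decrease estimated from the (noisy) zeroth-order oracle.
        actual_dec = f_est(x) - f_est(x + s)
        if model_dec > 0 and actual_dec >= eta * model_dec:
            x, sigma = x + s, max(sigma / gamma, 1e-8)  # success: relax
        else:
            sigma *= gamma  # failure: regularize more strongly
    return x

# Toy usage: minimize a noisy quadratic f(x) = 0.5 ||x||^2.
rng = np.random.default_rng(0)
f_est = lambda x: 0.5 * x @ x + 1e-3 * rng.standard_normal()
grad_est = lambda x: x + 1e-3 * rng.standard_normal(x.shape)
hess_est = lambda x: np.eye(x.size)
x_final = stochastic_arc(f_est, grad_est, hess_est, np.ones(5))
print(np.linalg.norm(x_final))  # should be close to 0
```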
Cite
Text
Scheinberg and Xie. "Stochastic Adaptive Regularization Method with Cubics: A High Probability Complexity Bound." NeurIPS 2022 Workshops: OPT, 2022.
Markdown
[Scheinberg and Xie. "Stochastic Adaptive Regularization Method with Cubics: A High Probability Complexity Bound." NeurIPS 2022 Workshops: OPT, 2022.](https://mlanthology.org/neuripsw/2022/scheinberg2022neuripsw-stochastic/)
BibTeX
@inproceedings{scheinberg2022neuripsw-stochastic,
title = {{Stochastic Adaptive Regularization Method with Cubics: A High Probability Complexity Bound}},
author = {Scheinberg, Katya and Xie, Miaolan},
booktitle = {NeurIPS 2022 Workshops: OPT},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/scheinberg2022neuripsw-stochastic/}
}