Probabilistic Line Searches for Stochastic Optimization

Abstract

In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.
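For context, the classical (deterministic) Wolfe conditions that the paper generalizes to a probabilistic belief can be sketched as follows. This is an illustrative example, not the paper's algorithm; the function name and the constants `c1` and `c2` are conventional choices, not taken from the paper.

```python
def wolfe_conditions(f0, df0, ft, dft, t, c1=1e-4, c2=0.9):
    """Check the two (weak) Wolfe conditions for a step of size t along a
    descent direction, given the function value and directional derivative
    at the start (f0, df0 < 0) and at the candidate point (ft, dft)."""
    sufficient_decrease = ft <= f0 + c1 * t * df0   # Armijo condition
    curvature = dft >= c2 * df0                     # curvature condition
    return sufficient_decrease and curvature
```

For example, along the 1D objective f(t) = (t - 1)^2 starting at t = 0 (so f0 = 1, df0 = -2), the full step t = 1 (ft = 0, dft = 0) satisfies both conditions, while the tiny step t = 0.01 (ft ≈ 0.9801, dft = -1.98) fails the curvature condition. The paper's contribution is to evaluate a belief over these conditions when f and its gradient are only observed with noise, via a Gaussian process surrogate.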

Cite

Text

Mahsereci and Hennig. "Probabilistic Line Searches for Stochastic Optimization." Journal of Machine Learning Research, 2017.

Markdown

[Mahsereci and Hennig. "Probabilistic Line Searches for Stochastic Optimization." Journal of Machine Learning Research, 2017.](https://mlanthology.org/jmlr/2017/mahsereci2017jmlr-probabilistic/)

BibTeX

@article{mahsereci2017jmlr-probabilistic,
  title     = {{Probabilistic Line Searches for Stochastic Optimization}},
  author    = {Mahsereci, Maren and Hennig, Philipp},
  journal   = {Journal of Machine Learning Research},
  year      = {2017},
  pages     = {1--59},
  volume    = {18},
  url       = {https://mlanthology.org/jmlr/2017/mahsereci2017jmlr-probabilistic/}
}