Open Problem: Fast Stochastic Exp-Concave Optimization
Abstract
Stochastic exp-concave optimization is an important primitive in machine learning that captures several fundamental problems, including linear regression, logistic regression, and more. The exp-concavity property allows for fast convergence rates compared to general stochastic optimization. However, current algorithms that attain such rates scale poorly with the dimension n and run in time O(n^4), even on very simple instances of the problem. The question we pose is whether it is possible to obtain fast rates for exp-concave functions using more computationally efficient algorithms.
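For readers unfamiliar with the terminology, the following is the standard definition of exp-concavity and the associated rate separation; it is provided here as background and is not part of the original abstract.

$$
f \colon \mathcal{K} \to \mathbb{R} \text{ is } \alpha\text{-exp-concave} \iff x \mapsto \exp(-\alpha f(x)) \text{ is concave on } \mathcal{K}.
$$

For such losses, excess-risk rates of order roughly $n/T$ after $T$ samples (up to logarithmic factors) are attainable, in contrast to the $O(1/\sqrt{T})$ rate of generic stochastic convex optimization.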
Cite
Text
Koren. "Open Problem: Fast Stochastic Exp-Concave Optimization." Annual Conference on Computational Learning Theory, 2013.Markdown
[Koren. "Open Problem: Fast Stochastic Exp-Concave Optimization." Annual Conference on Computational Learning Theory, 2013.](https://mlanthology.org/colt/2013/koren2013colt-open/)BibTeX
@inproceedings{koren2013colt-open,
title = {{Open Problem: Fast Stochastic Exp-Concave Optimization}},
author = {Koren, Tomer},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2013},
pages = {1073-1075},
url = {https://mlanthology.org/colt/2013/koren2013colt-open/}
}