Exploring Local Norms in Exp-Concave Statistical Learning
Abstract
We consider the standard problem of stochastic convex optimization with exp-concave losses using Empirical Risk Minimization in a convex class. Answering a question raised in several prior works, we provide an $O\!\left( d/n + \log(1/\delta)/n \right)$ excess risk bound valid for a wide class of bounded exp-concave losses, where $d$ is the dimension of the convex reference set, $n$ is the sample size, and the bound holds with probability at least $1 - \delta$. Our result is based on a unified geometric assumption on the gradient of losses and the notion of local norms.
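For reference, a loss $\ell$ is $\alpha$-exp-concave when $w \mapsto \exp(-\alpha\, \ell(w, z))$ is concave; the setting and the bound from the abstract can be sketched as follows (notation for the class $\mathcal{W}$, samples $z_i$, and minimizer $\hat{w}$ is illustrative, not taken from the paper):

```latex
% Exp-concavity: for some \alpha > 0, the map w \mapsto e^{-\alpha \ell(w, z)} is concave in w.
% ERM over a convex class \mathcal{W} \subset \mathbb{R}^d, given i.i.d. samples z_1, \dots, z_n:
\hat{w} \in \operatorname*{arg\,min}_{w \in \mathcal{W}} \; \frac{1}{n} \sum_{i=1}^{n} \ell(w, z_i)
% Excess risk bound, holding with probability at least 1 - \delta:
\mathbb{E}\,\ell(\hat{w}, z) - \min_{w \in \mathcal{W}} \mathbb{E}\,\ell(w, z)
  \;\lesssim\; \frac{d}{n} + \frac{\log(1/\delta)}{n}
```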
Cite
Text
Puchkin and Zhivotovskiy. "Exploring Local Norms in Exp-Concave Statistical Learning." Conference on Learning Theory, 2023.

Markdown

[Puchkin and Zhivotovskiy. "Exploring Local Norms in Exp-Concave Statistical Learning." Conference on Learning Theory, 2023.](https://mlanthology.org/colt/2023/puchkin2023colt-exploring/)

BibTeX
@inproceedings{puchkin2023colt-exploring,
title = {{Exploring Local Norms in Exp-Concave Statistical Learning}},
author = {Puchkin, Nikita and Zhivotovskiy, Nikita},
booktitle = {Conference on Learning Theory},
year = {2023},
pages = {1993--2013},
volume = {195},
url = {https://mlanthology.org/colt/2023/puchkin2023colt-exploring/}
}