Fast Rates with High Probability in Exp-Concave Statistical Learning
Abstract
We present an algorithm for the statistical learning setting with a bounded exp-concave loss in $d$ dimensions that obtains excess risk $O(d \log(1/\delta)/n)$ with probability $1 - \delta$. The core technique is to boost the confidence of recent in-expectation $O(d/n)$ excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that a regret bound for any online learner in this setting translates to a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner. Lastly, we present high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. One bound obtains a nearly optimal rate without requiring the loss to be Lipschitz continuous, and another requires Lipschitz continuity but obtains the optimal rate.
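The online-to-batch conversion referenced in the abstract follows a standard pattern, sketched below with assumed notation (iterates $w_t$, samples $z_t$, loss $\ell$, risk $R$); this is the classical in-expectation argument, not the paper's high-probability strengthening of it.

```latex
% An online learner run on samples z_1,\dots,z_n produces iterates
% w_1,\dots,w_n with cumulative regret
\[
R_n \;=\; \sum_{t=1}^{n} \ell(w_t, z_t)
      \;-\; \min_{w \in \mathcal{W}} \sum_{t=1}^{n} \ell(w, z_t).
\]
% For convex losses, the averaged predictor
\[
\bar{w} \;=\; \frac{1}{n} \sum_{t=1}^{n} w_t
\]
% satisfies, by Jensen's inequality and taking expectations,
\[
\mathbb{E}\!\left[ R(\bar{w}) \right] \;-\; \min_{w \in \mathcal{W}} R(w)
\;\le\; \frac{\mathbb{E}[R_n]}{n},
\qquad
R(w) \;=\; \mathbb{E}_{z}\!\left[ \ell(w, z) \right].
\]
```

Since exp-concave losses admit $O(d \log n)$ regret (e.g. via Online Newton Step), this conversion yields $O(d \log n / n)$ excess risk in expectation; the paper shows the corresponding bound holds with probability $1 - \delta$.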
Cite
Text
Mehta. "Fast Rates with High Probability in Exp-Concave Statistical Learning." International Conference on Artificial Intelligence and Statistics, 2017.

Markdown
[Mehta. "Fast Rates with High Probability in Exp-Concave Statistical Learning." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/mehta2017aistats-fast/)

BibTeX
@inproceedings{mehta2017aistats-fast,
  title = {{Fast Rates with High Probability in Exp-Concave Statistical Learning}},
  author = {Mehta, Nishant A.},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year = {2017},
  pages = {1085--1093},
  url = {https://mlanthology.org/aistats/2017/mehta2017aistats-fast/}
}