Efficient Improper Learning for Online Logistic Regression

Abstract

We consider the setting of online logistic regression and study the regret with respect to the $\ell_2$-ball of radius $B$. It is known (see Hazan et al. (2014)) that any proper algorithm which has logarithmic regret in the number of samples (denoted $n$) necessarily suffers an exponential multiplicative constant in $B$. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving a logarithmic regret. Indeed, Foster et al. (2018) showed that the lower bound does not apply to improper algorithms and proposed a strategy based on exponential weights with prohibitive computational complexity. Our new algorithm, based on regularized empirical risk minimization with surrogate losses, satisfies a regret scaling as $O(B\log(Bn))$ with a per-round time complexity of order $O(d^2 + \log(n))$.
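
For concreteness, the regret with respect to the $\ell_2$-ball that the abstract refers to is the standard online-learning quantity written below; the notation ($x_t$, $y_t$, the learner's prediction $\hat{y}_t$, the logistic loss $\ell$) is our gloss rather than part of the original abstract. Since the algorithm is improper, the prediction $\hat{y}_t$ need not be of the form $\langle w, x_t \rangle$ for a fixed parameter $w$:

$$\mathrm{Reg}_n(B) \;=\; \sum_{t=1}^{n} \ell(y_t, \hat{y}_t) \;-\; \min_{\|w\|_2 \le B} \sum_{t=1}^{n} \ell(y_t, \langle w, x_t \rangle), \qquad \ell(y, z) = \log\!\big(1 + e^{-yz}\big), \quad y_t \in \{-1, 1\}.$$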

Cite

Text

Jézéquel et al. "Efficient Improper Learning for Online Logistic Regression." Conference on Learning Theory, 2020.

Markdown

[Jézéquel et al. "Efficient Improper Learning for Online Logistic Regression." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/jezequel2020colt-efficient/)

BibTeX

@inproceedings{jezequel2020colt-efficient,
  title     = {{Efficient Improper Learning for Online Logistic Regression}},
  author    = {Jézéquel, Rémi and Gaillard, Pierre and Rudi, Alessandro},
  booktitle = {Conference on Learning Theory},
  year      = {2020},
  pages     = {2085--2108},
  volume    = {125},
  url       = {https://mlanthology.org/colt/2020/jezequel2020colt-efficient/}
}