Open Problem: Better Bounds for Online Logistic Regression

Abstract

Known algorithms applied to online logistic regression on a feasible set of L_2 diameter D achieve regret bounds like O(e^D log T) in one dimension, but we show a bound of O(√D + log T) is possible in a binary 1-dimensional problem. Thus, we pose the following question: Is it possible to achieve a regret bound for online logistic regression that is O(poly(D) log(T))? Even if this is not possible in general, it would be interesting to have a bound that reduces to our bound in the one-dimensional case.
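For context, here is a minimal sketch of the online logistic regression protocol the abstract refers to, stated under the usual conventions (the feasible set W, the diameter bound, and labels y_t in {-1, +1} are standard assumptions, not spelled out on this page): on each round t the learner selects w_t from a feasible set W of L_2 diameter D, the adversary reveals an example (x_t, y_t), and the learner incurs the logistic loss. Regret is measured against the best fixed comparator in hindsight.

\ell_t(w) = \log\bigl(1 + e^{-y_t \langle x_t, w \rangle}\bigr), \qquad y_t \in \{-1, +1\},

\mathrm{Regret}(T) = \sum_{t=1}^{T} \ell_t(w_t) \;-\; \min_{w \in \mathcal{W}} \sum_{t=1}^{T} \ell_t(w).

The open problem asks whether Regret(T) = O(poly(D) log(T)) is achievable in this setting, in place of the exponential dependence on D in known bounds.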

Cite

Text

McMahan and Streeter. "Open Problem: Better Bounds for Online Logistic Regression." Proceedings of the 25th Annual Conference on Learning Theory, 2012.

Markdown

[McMahan and Streeter. "Open Problem: Better Bounds for Online Logistic Regression." Proceedings of the 25th Annual Conference on Learning Theory, 2012.](https://mlanthology.org/colt/2012/mcmahan2012colt-open/)

BibTeX

@inproceedings{mcmahan2012colt-open,
  title     = {{Open Problem: Better Bounds for Online Logistic Regression}},
  author    = {McMahan, H. Brendan and Streeter, Matthew},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  year      = {2012},
  pages     = {44.1--44.3},
  volume    = {23},
  url       = {https://mlanthology.org/colt/2012/mcmahan2012colt-open/}
}